This week in AI: AI-powered personalities are all the rage

Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here’s a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.

Last week during its annual Connect conference, Meta launched a host of new AI-powered chatbots across its messaging apps — WhatsApp, Messenger and Instagram DMs. Available for select users in the U.S., the bots are tuned to channel certain personalities and mimic celebrities including Kendall Jenner, Dwyane Wade, MrBeast, Paris Hilton, Charli D’Amelio and Snoop Dogg.

The bots are Meta’s latest bid to boost engagement across its family of platforms, particularly among a younger demographic. (According to a 2022 Pew Research Center survey, only about 32% of U.S. teens aged 13 to 17 say that they ever use Facebook, down sharply from the 71% who said so in the center’s 2014-2015 survey.) But the AI-powered personalities are also a reflection of a broader trend: the growing popularity of “character-driven” AI.

Consider Character.AI, which offers customizable AI companions with distinct personalities, like Charli D’Amelio as a dance enthusiast or Chris Paul as a pro golfer. This summer, Character.AI’s mobile app pulled in over 1.7 million new installs in less than a week, while its web app was topping 200 million visits per month. Moreover, Character.AI claimed that as of May, users were spending an average of 29 minutes per visit — a figure the company said eclipsed ChatGPT’s by 300% at a time when ChatGPT usage was declining.

That virality attracted backers including Andreessen Horowitz, which poured well over $100 million in venture capital into Character.AI, most recently valued at $1 billion.

Elsewhere, there’s Replika, the controversial AI chatbot platform, which in March had around 2 million users — 250,000 of whom were paying subscribers.

That’s not to mention Inworld, another AI-driven character success story, which is developing a platform for creating more dynamic NPCs in video games and other interactive experiences. To date, Inworld hasn’t shared much in the way of usage metrics. But the promise of more expressive, organic characters, driven by AI, has landed Inworld investments from Disney and grants from Fortnite and Unreal Engine developer Epic Games.

So clearly, there’s something to AI-powered chatbots with personalities. But what is it?

I’d wager that chatbots like ChatGPT and Claude, while undeniably useful in decidedly professional contexts, don’t hold the same allure as “characters.” They’re not as interesting, frankly — and it’s no surprise. General-purpose chatbots were designed to complete specific tasks, not hold an enlivening conversation.

But the question is, will AI-powered characters have staying power? Meta’s certainly hoping so, considering the resources it’s pouring into its new bot collection. I’m not sure myself — as with most tech, there’s a decent chance the novelty will wear off eventually. And then it’ll be on to the next big thing — whatever that ends up being.

Here are some other AI stories of note from the past few days:

  • Spotify tests AI-generated playlists: References discovered in the Spotify app’s code indicate the company may be developing generative AI playlists users could create using prompts, Sarah reports.
  • How much are artists making from generative AI? Who knows? Some generative AI vendors, like Adobe, have established funds and revenue sharing agreements to pay artists and other contributors to the data sets used to train their generative AI models. But it’s not clear how much these artists can actually earn, TC learned.
  • Google expands AI-powered search: Google opened up its generative AI search experience to teenagers and introduced a new feature to add context to the content that users see, along with an update to help train the search experience’s AI model to better detect false or offensive queries.
  • Amazon launches Bedrock in GA, brings CodeWhisperer to the enterprise: Amazon announced the general availability of Bedrock, its managed platform that offers a choice of generative AI models from Amazon itself and third-party partners through an API. The company also launched an enterprise tier for CodeWhisperer, Amazon’s AI-powered service to generate and suggest code.
  • OpenAI entertains hardware: The Information reports that storied former Apple product designer Jony Ive is in talks with OpenAI CEO Sam Altman about a mysterious AI hardware project. In the meantime, OpenAI — which is planning to soon release a more powerful version of its GPT-4 model with image analysis capabilities — could see its secondary-market valuation soar to $90 billion.
  • ChatGPT gains a voice: In other OpenAI news, ChatGPT evolved into much more than a text-based chatbot, with OpenAI announcing recently that it’s adding new voice and image-based smarts to the mix.
  • The writers’ strike and AI: After almost five months, the Writers Guild of America reached an agreement with Hollywood studios to end the writers’ strike. During the historic strike, AI emerged as a key point of contention between the writers and studios. Amanda breaks down the relevant new contract provisions.
  • Getty Images launches an image generator: Getty Images, one of the largest suppliers of stock images, editorial photos, videos and music, launched a generative AI art tool that it claims is “commercially safer” than other, rival solutions on the market. Prior to the launch of its own tool, Getty had been a vocal critic of generative AI products like Stable Diffusion, which was trained on a subset of its image content library.
  • Adobe brings gen AI to the web: Adobe officially launched Photoshop for the web for all users with paid plans. The web version, which was in beta for almost two years, is now available with Firefly-powered AI tools such as generative fill and generative expand.
  • Amazon to invest billions in Anthropic: Amazon has agreed to invest up to $4 billion in the AI startup Anthropic, the two firms said, as the e-commerce group steps up its rivalry against Microsoft, Meta, Google and Nvidia in the fast-growing AI sector.

More machine learnings

When I was talking with Anthropic CEO Dario Amodei about the capabilities of AI, he seemed to think there were no hard limits that we know of — not that there are none whatsoever, but that he had yet to encounter a (reasonable) problem that LLMs were unable to at least make a respectable effort at. Is it optimism or does he know of what he speaks? Only time will tell.

In the meantime, there’s still plenty of research going on. This project from the University of Edinburgh takes neural networks back to their roots: neurons. Not the complex, subtle neural complexes of humans, but the simpler (yet highly effective) ones of insects.

From the paper, a diagram showing views of the robot and some of its vision system data.

Ants and other small bugs are remarkably good at navigating complex environments, despite their more rudimentary vision and memory capabilities. The team built a digital network based on observed insect neural networks, and found that it was able to successfully navigate a small robot visually with very little in the way of resources. Systems in which power and size are particularly limited may be able to use the method in time. There’s always something to learn from nature!

Color science is another space where humans lead machines, more or less by definition: we are constantly striving to replicate what we see with better fidelity, but sometimes that fails in ways that in retrospect seem predictable. Skin tone, for example, is imperfectly captured by systems designed around light skin — especially when ML systems with biased training sets come into play. If an imaging system doesn’t understand skin color, it can’t set exposure and adjust color properly.

Images from Sony research on more inclusive skin color estimation.

Sony is aiming to improve these systems with a new metric for skin color that defines it more comprehensively, yet efficiently, using a color scale as well as perceived light/dark levels. In the process, the researchers showed that bias in existing systems extends not just to lightness but to skin hue as well.
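The two axes in that description can be made concrete with a little color math. Below is a minimal, illustrative sketch in Python (not Sony’s published metric; the sample swatch and function names are assumptions) of how a skin tone sample might be placed on a perceived-lightness scale and a hue scale by converting an sRGB value to CIELAB and reading off L* and the hue angle.

```python
import math

def srgb_to_linear(c: float) -> float:
    # Undo the sRGB gamma curve; input is a channel value in [0, 1].
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def rgb_to_lab(r: int, g: int, b: int):
    # sRGB (0-255) -> linear RGB -> XYZ (D65) -> CIELAB.
    rl, gl, bl = (srgb_to_linear(v / 255.0) for v in (r, g, b))
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    xn, yn, zn = 0.95047, 1.0, 1.08883  # D65 reference white

    def f(t: float) -> float:
        # Piecewise cube-root function from the CIELAB definition.
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29

    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    lightness = 116 * fy - 16   # L*: 0 (black) to 100 (white)
    a = 500 * (fx - fy)         # a*: green-red axis
    b_star = 200 * (fy - fz)    # b*: blue-yellow axis
    return lightness, a, b_star

def skin_tone_axes(r: int, g: int, b: int):
    # Two-axis description: perceived lightness (L*) plus hue angle in degrees.
    lightness, a, b_star = rgb_to_lab(r, g, b)
    hue_angle = math.degrees(math.atan2(b_star, a)) % 360
    return lightness, hue_angle

# Illustrative swatch only, not taken from the Sony paper.
print(skin_tone_axes(198, 134, 66))
```

A system that scores skin tone on lightness alone would treat two samples with the same L* but very different hue angles as identical, which is one way the hue-related bias the researchers describe can go unnoticed.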

Speaking of fixing photos, Google has a new technique almost certainly destined (in some refined form) for its Pixel devices, which are heavy on the computational photography. RealFill is a generative plug-in that can fill in an image with “what should have been there.” For instance, if your best shot of a birthday party happens to crop out the balloons, you give the system the good shot plus some others from the same scene. It figures out that there “should” be some balloons at the top of the strings and adds them in using information from the other pictures.

Image Credits: Google

It’s far from perfect (they’re still hallucinations, just well-informed hallucinations), but used judiciously it could be a really helpful tool. Is it still a “real” photo, though? Well, let’s not get into that just now.

Lastly, machine learning models may prove more accurate than humans in predicting the number of aftershocks following a big earthquake. To be clear (as the researchers emphasize), this isn’t about “predicting” earthquakes, but characterizing them accurately when they happen so that you can tell whether that 5.8 is the type that leads to three more minor quakes within an hour, or only one more after 20 minutes. The latest models are still only decent at it, and only under specific circumstances — but they are not wrong, and they can work through large amounts of data quickly. In time these models may help seismologists better predict quakes and aftershocks, but as the scientists note, it’s far more important to be prepared; after all, even knowing one is coming doesn’t stop it from happening.




Women in AI: Rashida Richardson, senior counsel at Mastercard focusing on AI and privacy

To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Rashida Richardson is senior counsel at Mastercard, where her purview covers legal issues relating to privacy and data protection in addition to AI.

Formerly the director of policy research at the AI Now Institute, the research institute studying the social implications of AI, and a senior policy advisor for data and democracy at the White House Office of Science and Technology Policy, Richardson has been an assistant professor of law and political science at Northeastern University since 2021. There, she specializes in race and emerging technologies.

Rashida Richardson, senior counsel, AI at Mastercard

Briefly, how did you get your start in AI? What attracted you to the field?

My background is as a civil rights attorney, where I worked on a range of issues including privacy, surveillance, school desegregation, fair housing and criminal justice reform. While working on these issues, I witnessed the early stages of government adoption and experimentation with AI-based technologies. In some cases, the risks and concerns were apparent, and I helped lead a number of technology policy efforts in New York State and City to create greater oversight, evaluation or other safeguards. In other cases, I was inherently skeptical of the benefits or efficacy claims of AI-related solutions, especially those marketed to solve or mitigate structural issues like school desegregation or fair housing.

My prior experience also made me hyper-aware of existing policy and regulatory gaps. I quickly noticed that there were few people in the AI space with my background and experience, or offering the analysis and potential interventions I was developing in my policy advocacy and academic work. So I realized this was a field and space where I could make meaningful contributions and also build on my prior experience in unique ways.

I decided to focus both my legal practice and academic work on AI, specifically policy and legal issues concerning its development and use.

What work are you most proud of in the AI field?

I’m happy that the issue is finally receiving more attention from all stakeholders, but especially policymakers. There’s a long history in the United States of the law playing catch-up or never adequately addressing technology policy issues, and five or six years ago, it felt like that might be the fate of AI. I remember engaging with policymakers in formal settings like U.S. Senate hearings and educational forums, and most treated the issue as arcane or something that didn’t require urgency despite the rapid adoption of AI across sectors. Yet in the past year or so, there’s been a significant shift: AI is a constant feature of public discourse, and policymakers better appreciate the stakes and the need for informed action. I also think stakeholders across all sectors, including industry, recognize that AI poses unique benefits and risks that may not be resolved through conventional practices, so there’s more acknowledgement of — or at least appreciation for — policy interventions.

How do you navigate the challenges of the male-dominated tech industry, and, by extension, the male-dominated AI industry?

As a Black woman, I’m used to being a minority in many spaces, and while the AI and tech industries are extremely homogeneous fields, they’re not novel or that different from other fields of immense power and wealth, like finance and the legal profession. So I think my prior work and lived experience helped prepare me for this industry, because I’m hyper-aware of preconceptions I may have to overcome and challenging dynamics I’ll likely encounter. I rely on my experience to navigate, because I have a unique background and perspective, having worked on AI across all sectors — academia, industry, government and civil society.

What are some issues AI users should be aware of?

Two key issues AI users should be aware of are: (1) the need for greater comprehension of the capabilities and limitations of different AI applications and models, and (2) the great uncertainty regarding the ability of current and prospective laws to resolve conflicts or address certain concerns regarding AI use.

On the first point, there’s an imbalance in public discourse between the perceived benefits and potential of AI applications and their actual capabilities and limitations. This issue is compounded by the fact that AI users may not appreciate the difference between AI applications and models. Public awareness of AI grew with the release of ChatGPT and other commercially available generative AI systems, but those AI models are distinct from other types of AI models that consumers have engaged with for years, like recommendation systems. When the conversation about AI is muddled — where the technology is treated as monolithic — it tends to distort public understanding of what each type of application or model can actually do, and the risks associated with their limitations or shortcomings.

On the second point, law and policy regarding AI development and use are evolving. While there are a variety of laws (e.g. civil rights, consumer protection, competition, fair lending) that already apply to AI use, we’re in the early stages of seeing how these laws will be enforced and interpreted. We’re also in the early stages of policy development that’s specifically tailored for AI — but what I’ve noticed both from legal practice and my research is that there are areas that remain unresolved by this legal patchwork and will only be resolved when there’s more litigation involving AI development and use. Generally, I don’t think there’s great understanding of the current status of the law and AI, and how legal uncertainty regarding key issues like liability can mean that certain risks, harms and disputes may remain unsettled until years of litigation between businesses or between regulators and companies produce legal precedent that may provide some clarity.

What is the best way to responsibly build AI?

The challenge with building AI responsibly is that many of the underlying pillars of responsible AI, such as fairness and safety, are based on normative values, for which there are no shared definitions or understanding. So one could presumably act responsibly and still cause harm, or one could act maliciously and rely on the absence of shared norms around these concepts to claim good-faith action. Until there are global standards or some shared framework for what it means to responsibly build AI, the best way one can pursue this goal is to have clear principles, policies, guidance and standards for responsible AI development and use that are enforced through internal oversight, benchmarking and other governance practices.

How can investors better push for responsible AI?

Investors can do a better job of defining, or at least clarifying, what constitutes responsible AI development or use, and taking action when AI actors’ practices do not align. Currently, “responsible” or “trustworthy” AI are effectively marketing terms because there are no clear standards to evaluate AI actors’ practices. While some nascent regulations like the EU AI Act will establish some governance and oversight requirements, there are still areas where AI actors can be incentivized by investors to develop better practices that center human values or societal good. However, if investors are unwilling to act when there is misalignment or evidence of bad actors, then there will be little incentive to adjust behavior or practices.




House punts on AI with directionless new task force


The House of Representatives has formed a task force on artificial intelligence that will “ensure America continues leading in this strategic area,” as Speaker Mike Johnson put it. But the announcement feels more like a punt after years of indecision that show no sign of ending.

In a way, this task force — chaired by California Reps. Ted Lieu and Jay Obernolte — is a welcome sign of Congress doing something, anything, on an important topic that has become the darling of tech investment. But in another, more important way, it comes off as lip service at a time when many feel AI and tech are running circles around regulators and lawmakers.

Furthermore, the dispiriting partisanship and obstruction on display every day in Congress renders quaint any notion that this task force would produce anything of value at any time, let alone during a historically divisive election year.

“As new innovations in AI continue to emerge, Congress and our partners in federal government must keep up. House Republicans and Democrats will work together to create a comprehensive report detailing the regulatory standards and congressional actions needed to both protect consumers and foster continued investment and innovation in AI,” said Rep. Obernolte in the announcement.

And Rep. Lieu: “AI has the capability of changing our lives as we know it. The question is how to ensure AI benefits society instead of harming us. As a recovering Computer Science major, I know this will not be an easy or quick or one-time task, but I believe Congress has an essential role to play in the future of AI. I have been heartened to see so many Members of Congress of all political persuasions agree.”

Of course, the White House, numerous agencies, the EU and countless other authorities and organizations are already issuing “comprehensive reports” and recommending legislative actions, but what’s one more?

It seems as though Congress realized that it was the last substantive entity to act on this industry-reshaping force, and so representatives reached across the aisle to pat each other on the back for taking the smallest possible step toward future legislation.

But at the same time, with Congress dysfunctional (having passed a historically low number of bills) and all eyes on the 2024 presidential election, this task force is just a way of kicking the can down the road until they know what they can get away with under the coming administration.

Certainly studying AI and its risks and benefits is not a bad thing — but it’s a little late in the day to be announcing it. This task force is long overdue, and as such we may welcome it while also treating it with the skepticism that pandering from lawmakers deserves.

Everyone involved with this will point to it when asked why they haven’t acted on AI, which many voters fear is coming for their jobs or automating processes that once had a purposeful human touch. “But we started this task force!” Yes, and the EU has had its task force working on this subject since the pandemic days.

The announcement of the task force kept expectations low, with no timeline or deliverables that voters or watchdogs can hold them to. Even the report is something they will only “seek” to produce!

Furthermore, considering the expert agencies are at risk of being declawed by a Supreme Court decision, it is hard to even imagine what a regulatory structure would look like a year from now. Want the FTC, FCC, SEC, EPA or anyone else to help out? They may be judicially restrained from doing so come 2025.

Perhaps this task force is Congress’s admission that during such tumultuous times, and lacking any real insight into an issue, all they can do is say “we’ll look into it.”




Bioptimus raises $35 million seed round to develop AI foundational model focused on biology


There’s a new generative AI startup based in Paris. But what makes Bioptimus interesting is that it plans to apply everything we’ve collectively learned about AI models over the past few years with a narrow, exclusive focus on biology.

The reason it makes sense to create a startup focused exclusively on biology is that access to training data isn’t as simple in this field. While OpenAI is slowly moving away from web crawling in favor of licensing deals with content publishers, Bioptimus faces different data challenges, as it will have to deal with sensitive clinical data that isn’t publicly available at all.

And just like other AI startups, Bioptimus is going to be a capital-intensive company, as it will train its models on expensive GPUs and hire talented researchers. That’s why the startup is raising a $35 million seed round led by Sofinnova Partners. Bpifrance’s Large Venture fund, Frst, Cathay Innovation, Headline, Hummingbird, NJF Capital, Owkin, Top Harvest Capital and Xavier Niel also participated in this funding round.

Bioptimus isn’t coming out of nowhere. At the helm of the company, Jean-Philippe Vert serves as co-founder and executive chairman in a non-operational role. At his day job, he is the chief R&D officer at Owkin, the French biotech unicorn that is trying to discover new drugs and improve diagnostics through AI.

Rodolphe Jenatton, the CTO of Bioptimus, brings further experience in artificial intelligence, having been a senior research scientist at Google. Several co-founders are also former researchers at Google DeepMind.

Image Credits: Bioptimus

Through its work for top biopharmas, Owkin has amassed multimodal patient data via partnerships with leading academic hospitals around the world. Bioptimus will leverage this unique data set to train its foundational model.

A moonshot project from Owkin

Bioptimus could even be considered a sort of spin-off from Owkin — or a so-called moonshot project. But why didn’t Owkin decide to work on a foundational model in-house? Creating new AI models is such a daunting task that creating a separate entity made more sense.

“Building biology [foundational models] is not a part of Owkin’s roadmap, but Owkin supports and is keen to partner with a company like Bioptimus. Training very large scale [foundational models] requires important resources in terms of data volume, computing power, and breadth of data modalities that are easier to unlock as a specific entity,” Jean-Philippe Vert told TechCrunch. “As a ‘pure player’ in foundational models, Bioptimus is better set up to do this.”

The startup has also signed a partnership with Amazon Web Services. It sounds like the company’s model will be trained in Amazon’s data centers. Now that Bioptimus is well funded, it’s time to work on the AI model and see what the biotech research community can do with it.

“Eventually, the AI we build will improve disease diagnosis, precision medicine, and will help create new biomolecules for medical or environmental use,” Vert said.


