News app turned X competitor Artifact now lets users generate AI images for their posts

Artifact, the news aggregator recently turned X competitor, is adding a new generative AI feature. The app, built by Instagram’s co-founders, announced last week it would allow users to post their own updates, which didn’t have to include a link. Today, it’s expanding on that addition with the ability for users to create their own images to accompany their posts using generative AI.

The company believes the feature could help users make their posts more compelling as an eye-catching image can help tell their story. For example, it suggests users could create a landscape scene when posting about climate, or generate a concept car if talking about the future of EVs, among other things.

The feature, which has been in development over the past few months, can be accessed by tapping the plus ("+") icon in the photo frame when creating a new post on Artifact, then choosing the "Create with AI" option. Users can then enter their prompt to see the generated image appear. The prompt can include a subject, a medium (like illustration or 3D image), and a style (like pop art or photorealistic).

The company tells us it’s using a fine-tuned Stable Diffusion model for the image generation process.
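The prompt format the app describes (a subject, an optional medium, and an optional style) can be sketched as a small helper. This is an illustrative sketch, not Artifact's actual code; the function name and the joining convention are assumptions.

```python
def build_prompt(subject, medium=None, style=None):
    """Compose a text-to-image prompt from the three parts Artifact describes:
    a subject, an optional medium (e.g. "illustration", "3D image"),
    and an optional style (e.g. "pop art", "photorealistic")."""
    parts = [subject]
    if medium:
        parts.append(medium)
    if style:
        parts.append(f"in a {style} style")
    return ", ".join(parts)

# Example prompt for a post about the future of EVs:
prompt = build_prompt("a futuristic concept car",
                      medium="3D render", style="photorealistic")

# A fine-tuned Stable Diffusion checkpoint could then consume such a prompt,
# e.g. via Hugging Face's diffusers library (shown only as a sketch; Artifact
# hasn't said which tooling it uses):
#
#   from diffusers import StableDiffusionPipeline
#   pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
#   image = pipe(prompt).images[0]
```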

Artifact says the whole process should take only a few seconds. However, if users aren’t satisfied with the results, they can re-use the same prompt to generate another image or revise the prompt to try again.

The addition is one of many AI technologies the app uses to help personalize its content to the end user. Originally, Artifact began as a newsreader of sorts, allowing AI to prioritize and surface the best content. The app also introduced AI technology to rewrite clickbait headlines and summarize stories so readers could get an overview of an article before diving in.

However, more recently, the app has been shifting away from being just a place to catch up on news toward more of an X or Threads rival, adding social features like commenting, user profiles, and the ability for users to post their own links and text-based posts. This broadens the content available through Artifact and makes the product more social, letting curators of content develop a following — not all that dissimilar from X (formerly Twitter).

With generative AI imagery, creators have another tool to attract users to their content and build their audience.




Women in AI: Rashida Richardson, senior counsel at Mastercard focusing on AI and privacy


To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Rashida Richardson is senior counsel at Mastercard, where her purview covers legal issues relating to privacy and data protection, as well as AI.

Formerly the director of policy research at the AI Now Institute, the research institute studying the social implications of AI, and a senior policy advisor for data and democracy at the White House Office of Science and Technology Policy, Richardson has been an assistant professor of law and political science at Northeastern University since 2021. There, she specializes in race and emerging technologies.

Rashida Richardson, senior counsel, AI at Mastercard

Briefly, how did you get your start in AI? What attracted you to the field?

My background is as a civil rights attorney, where I worked on a range of issues including privacy, surveillance, school desegregation, fair housing and criminal justice reform. While working on these issues, I witnessed the early stages of government adoption and experimentation with AI-based technologies. In some cases, the risks and concerns were apparent, and I helped lead a number of technology policy efforts in New York State and City to create greater oversight, evaluation or other safeguards. In other cases, I was inherently skeptical of the benefits or efficacy claims of AI-related solutions, especially those marketed to solve or mitigate structural issues like school desegregation or fair housing.

My prior experience also made me hyper-aware of existing policy and regulatory gaps. I quickly noticed that there were few people in the AI space with my background and experience, or offering the analysis and potential interventions I was developing in my policy advocacy and academic work. So I realized this was a field and space where I could make meaningful contributions and also build on my prior experience in unique ways.

I decided to focus both my legal practice and academic work on AI, specifically policy and legal issues concerning their development and use.

What work are you most proud of in the AI field?

I’m happy that the issue is finally receiving more attention from all stakeholders, but especially policymakers. There’s a long history in the United States of the law playing catch-up with, or never adequately addressing, technology policy issues, and five or six years ago, it felt like that might be the fate of AI. I remember engaging with policymakers in formal settings like U.S. Senate hearings and educational forums, and most treated the issue as arcane or something that didn’t require urgency despite the rapid adoption of AI across sectors. Yet in the past year or so, there’s been a significant shift: AI is a constant feature of public discourse, and policymakers better appreciate the stakes and the need for informed action. I also think stakeholders across all sectors, including industry, recognize that AI poses unique benefits and risks that may not be resolved through conventional practices, so there’s more acknowledgement — or at least appreciation — of the need for policy interventions.

How do you navigate the challenges of the male-dominated tech industry, and, by extension, the male-dominated AI industry?

As a Black woman, I’m used to being a minority in many spaces, and while the AI and tech industries are extremely homogeneous fields, they’re not novel or that different from other fields of immense power and wealth, like finance and the legal profession. So I think my prior work and lived experience helped prepare me for this industry, because I’m hyper-aware of preconceptions I may have to overcome and challenging dynamics I’ll likely encounter. I rely on my experience to navigate, because I have a unique background and perspective, having worked on AI across all sectors — academia, industry, government and civil society.

What are some issues AI users should be aware of?

Two key issues AI users should be aware of are: (1) greater comprehension of the capabilities and limitations of different AI applications and models, and (2) how there’s great uncertainty regarding the ability of current and prospective laws to resolve conflict or certain concerns regarding AI use.

On the first point, there’s an imbalance in public discourse and understanding regarding the benefits and potential of AI applications and their actual capabilities and limitations. This issue is compounded by the fact that AI users may not appreciate the difference between AI applications and models. Public awareness of AI grew with the release of ChatGPT and other commercially available generative AI systems, but those AI models are distinct from other types of AI models that consumers have engaged with for years, like recommendation systems. When the conversation about AI is muddled — where the technology is treated as monolithic — it tends to distort public understanding of what each type of application or model can actually do, and the risks associated with their limitations or shortcomings.

On the second point, law and policy regarding AI development and use is evolving. While there are a variety of laws (e.g. civil rights, consumer protection, competition, fair lending) that already apply to AI use, we’re in the early stages of seeing how these laws will be enforced and interpreted. We’re also in the early stages of policy development that’s specifically tailored for AI — but what I’ve noticed both from legal practice and my research is that there are areas that remain unresolved by this legal patchwork and will only be resolved when there’s more litigation involving AI development and use. Generally, I don’t think there’s great understanding of the current status of the law and AI, and how legal uncertainty regarding key issues like liability can mean that certain risks, harms and disputes may remain unsettled until years of litigation between businesses or between regulators and companies produce legal precedent that may provide some clarity.

What is the best way to responsibly build AI?

The challenge with building AI responsibly is that many of the underlying pillars of responsible AI, such as fairness and safety, are based on normative values, for which there are no shared definitions or understandings. So one could presumably act responsibly and still cause harm, or one could act maliciously and rely on the absence of shared norms to claim good-faith action. Until there are global standards or some shared framework for what it means to responsibly build AI, the best way to pursue this goal is to have clear principles, policies, guidance and standards for responsible AI development and use that are enforced through internal oversight, benchmarking and other governance practices.

How can investors better push for responsible AI?

Investors can do a better job at defining, or at least clarifying, what constitutes responsible AI development or use, and taking action when AI actors’ practices do not align. Currently, “responsible” or “trustworthy” AI are effectively marketing terms because there are no clear standards to evaluate AI actor practices. While some nascent regulations like the EU AI Act will establish some governance and oversight requirements, there are still areas where AI actors can be incentivized by investors to develop better practices that center human values or societal good. However, if investors are unwilling to act when there is misalignment or evidence of bad actors, then there will be little incentive to adjust behavior or practices.




House punts on AI with directionless new task force


The House of Representatives has formed a task force on artificial intelligence that will “ensure America continues leading in this strategic area,” as Speaker Mike Johnson put it. But the announcement feels more like a punt after years of indecision that show no sign of ending.

In a way, this task force — chaired by California Reps. Ted Lieu and Jay Obernolte — is a welcome sign of Congress doing something, anything, on an important topic that has become the darling of tech investment. But in another, more important way, it comes off as lip service at a time when many feel AI and tech are running circles around regulators and lawmakers.

Furthermore, the dispiriting partisanship and obstruction on display every day in Congress renders quaint any notion that this task force would produce anything of value at any time, let alone during a historically divisive election year.

“As new innovations in AI continue to emerge, Congress and our partners in federal government must keep up. House Republicans and Democrats will work together to create a comprehensive report detailing the regulatory standards and congressional actions needed to both protect consumers and foster continued investment and innovation in AI,” said Rep. Obernolte in the announcement.

And Rep. Lieu: “AI has the capability of changing our lives as we know it. The question is how to ensure AI benefits society instead of harming us. As a recovering Computer Science major, I know this will not be an easy or quick or one-time task, but I believe Congress has an essential role to play in the future of AI. I have been heartened to see so many Members of Congress of all political persuasions agree.”

Of course, the White House, numerous agencies, the EU and countless other authorities and organizations are already issuing “comprehensive reports” and recommending legislative actions, but what’s one more?

It seems as though Congress realized that it was the last substantive entity to act on this industry-reshaping force, and so representatives reached across the aisle to pat each other on the back for taking the smallest possible step toward future legislation.

But at the same time, with Congress dysfunctional (having passed a historically low number of bills) and all eyes on the 2024 presidential election, this task force is just a way of kicking the can down the road until they know what they can get away with under the coming administration.

Certainly studying AI and its risks and benefits is not a bad thing — but it’s a little late in the day to be announcing it. This task force is long overdue, and as such we may welcome it but also treat it with the same skepticism that lawmakers’ pandering deserves.

Everyone involved with this will point to it when asked why they haven’t acted on AI, which many voters fear is coming for their jobs or automating processes that once had a purposeful human touch. “But we started this task force!” Yes, and the EU has had their task force working on this subject since the pandemic days.

The announcement of the task force kept expectations low, with no timeline or deliverables that voters or watchdogs can hold them to. Even the report is something they will only “seek” to produce!

Furthermore, considering the expert agencies are at risk of being declawed by a Supreme Court decision, it is hard to even imagine what a regulatory structure would look like a year from now. Want the FTC, FCC, SEC, EPA or anyone else to help out? They may be judicially restrained from doing so come 2025.

Perhaps this task force is Congress’s admission that during such tumultuous times, and lacking any real insight into an issue, all they can do is say “we’ll look into it.”




Bioptimus raises $35 million seed round to develop AI foundational model focused on biology



There’s a new generative AI startup based in Paris. But what makes Bioptimus interesting is that it plans to apply everything we’ve collectively learned about AI models over the past few years with a narrow, exclusive focus on biology.

The reason it makes sense to create a startup focused exclusively on biology is that access to training data isn’t as simple in this field. While OpenAI is slowly moving away from web crawling in favor of licensing deals with content publishers, Bioptimus faces different data challenges, as it will have to deal with sensitive clinical data that isn’t publicly available at all.

And just like other AI startups, Bioptimus will be capital-intensive, as it will train its models on expensive GPUs and hire talented researchers. That’s why the startup is raising a $35 million seed round led by Sofinnova Partners. Bpifrance’s Large Venture fund, Frst, Cathay Innovation, Headline, Hummingbird, NJF Capital, Owkin, Top Harvest Capital and Xavier Niel also participated in this funding round.

Bioptimus isn’t coming out of nowhere. At the helm of the company, Jean-Philippe Vert will act as co-founder and executive chairman in a non-operational role. At his day job, he is the Chief R&D Officer at Owkin, the French biotech unicorn that tries to discover new drugs and improve diagnostics through AI.

Rodolphe Jenatton, the CTO of Bioptimus, has more experience in artificial intelligence as he was a senior research scientist at Google. Several co-founders are also former researchers at Google DeepMind.

Image Credits: Bioptimus

As part of Owkin’s work for top biopharmas, Owkin has amassed multimodal patient data through partnerships with leading academic hospitals around the world. Bioptimus will leverage this unique data set to train its foundational model.

A moonshot project from Owkin

Bioptimus could even be considered a sort of spin-off from Owkin — or a so-called moonshot project. But why didn’t Owkin decide to work on a foundational model in-house? Creating new AI models is such a daunting task that creating a separate entity made more sense.

“Building biology [foundational models] is not a part of Owkin’s roadmap, but Owkin supports and is keen to partner with a company like Bioptimus. Training very large scale [foundational models] requires important resources in terms of data volume, computing power, and breadth of data modalities that are easier to unlock as a specific entity,” Jean-Philippe Vert told TechCrunch. “As a ‘pure player’ in foundational models, Bioptimus is better set up to do this.”

The startup has also signed a partnership with Amazon Web Services. It sounds like the company’s model will be trained in Amazon’s data centers. Now that Bioptimus is well funded, it’s time to work on the AI model and see what the biotech research community can do with it.

“Eventually, the AI we build will improve disease diagnosis, precision medicine, and will help create new biomolecules for medical or environmental use,” Vert said.




Copyright © 2023 Dailycrunch. & Managed by Shade Marketing & PR Agency