Data shows not all VC firms use the 2-and-20 rule



VCs often use the shorthand phrase “two and twenty” to refer to the 2% annual management fee a venture fund might charge and the 20% carried interest (or “performance fee”) it would take. In a nutshell: if a venture fund turns a $100 million profit on its investments, the fund keeps $20 million of that, and the remaining $80 million is paid out to the limited partners.

The “2 and 20” fee structure was originally associated with hedge funds, but VC firms and other investment funds use it as well. The structure breaks down into two types of fees: a management fee and a performance fee.

The management fee is a yearly charge calculated based on the total assets under management (AUM). Typically, the management fee is 2% of AUM, but new data from Carta shows that the 2% figure isn’t as universal as you might have been led to believe.

First, it’s useful to understand what the management fee is for. Basically, it compensates the fund managers, regardless of the fund’s performance. So a VC firm that charges a 2% fee for managing a $100 million fund will receive $2 million per year to cover rent, staff costs, marketing, travel and, well, everything else.
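The arithmetic above is simple enough to sketch. The function name and parameters below are purely illustrative (real fund terms vary widely — fees often step down after the investment period, and many funds add hurdle rates before carry kicks in):

```python
def two_and_twenty(fund_size, profit, mgmt_rate=0.02, carry_rate=0.20):
    """Illustrative 2-and-20 math: a yearly management fee on assets
    under management, plus carried interest on realized profit."""
    management_fee = mgmt_rate * fund_size          # charged each year
    carried_interest = carry_rate * max(profit, 0)  # only taken on gains
    lp_share = profit - carried_interest            # remainder goes to LPs
    return management_fee, carried_interest, lp_share

# The article's example: a $100 million fund that turns a $100 million profit.
fee, carry, lp_share = two_and_twenty(100_000_000, 100_000_000)
# fee      -> $2 million per year, covering rent, staff, marketing, travel
# carry    -> $20 million kept by the fund's managers
# lp_share -> $80 million paid out to the limited partners
```

Note that the management fee accrues every year on AUM regardless of performance, while carry is only earned on profits — which is why the two are discussed separately.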



House punts on AI with directionless new task force




The House of Representatives has formed a Task Force on artificial intelligence that will “ensure America continues leading in this strategic area,” as Speaker Mike Johnson put it. But the announcement feels more like a punt after years of indecision that show no sign of ending.

In a way this task force — chaired by California Reps. Ted Lieu and Jay Obernolte — is a welcome sign of Congress doing something, anything, on an important topic that has become the darling of tech investment. But in another, more important way, it comes off as lip service at a time when many feel AI and tech are running circles around regulators and lawmakers.

Furthermore, the dispiriting partisanship and obstruction on display every day in Congress renders quaint any notion that this task force would produce anything of value at any time, let alone during a historically divisive election year.

“As new innovations in AI continue to emerge, Congress and our partners in federal government must keep up. House Republicans and Democrats will work together to create a comprehensive report detailing the regulatory standards and congressional actions needed to both protect consumers and foster continued investment and innovation in AI,” said Rep. Obernolte in the announcement.

And Rep. Lieu: “AI has the capability of changing our lives as we know it. The question is how to ensure AI benefits society instead of harming us. As a recovering Computer Science major, I know this will not be an easy or quick or one-time task, but I believe Congress has an essential role to play in the future of AI. I have been heartened to see so many Members of Congress of all political persuasions agree.”

Of course, the White House, numerous agencies, the EU and countless other authorities and organizations are already issuing “comprehensive reports” and recommending legislative actions, but what’s one more?

It seems as though Congress realized that it was the last substantive entity to act on this industry-reshaping force, and so representatives reached across the aisle to pat each other on the back for taking the smallest possible step toward future legislation.

But at the same time, with Congress dysfunctional (having passed a historically low number of bills) and all eyes on the 2024 presidential election, this task force is just a way of kicking the can down the road until they know what they can get away with under the coming administration.

Certainly studying AI and its risks and benefits is not a bad thing — but it’s a little late in the day to be announcing it. This task force is long overdue, and as such we may welcome it while also treating it with the skepticism that lawmakers’ pandering deserves.

Everyone involved with this will point to it when asked why they haven’t acted on AI, which many voters fear is coming for their jobs or automating processes that once had a purposeful human touch. “But we started this task force!” Yes, and the EU has had its task force working on this subject since the pandemic days.

The announcement of the task force kept expectations low, with no timeline or deliverables that voters or watchdogs can hold them to. Even the report is something they will only “seek” to produce!

Furthermore, considering the expert agencies are at risk of being declawed by a Supreme Court decision, it is hard to even imagine what a regulatory structure would look like a year from now. Want the FTC, FCC, SEC, EPA or anyone else to help out? They may be judicially restrained from doing so come 2025.

Perhaps this task force is Congress’s admission that during such tumultuous times, and lacking any real insight into an issue, all they can do is say “we’ll look into it.”



Bioptimus raises $35 million seed round to develop AI foundational model focused on biology




There’s a new generative AI startup based in Paris. But what makes Bioptimus interesting is that it plans to apply everything we’ve collectively learned about AI models over the past few years with a narrow, exclusive focus on biology.

The reason it makes sense to create a startup focused exclusively on biology is that access to training data isn’t as simple in this field. While OpenAI is slowly moving away from web crawling in favor of licensing deals with content publishers, Bioptimus faces a different data challenge: it will have to deal with sensitive clinical data that isn’t publicly available at all.

And just like other AI startups, Bioptimus is going to be capital-intensive, as it will train its models on expensive GPUs and hire talented researchers. That’s why the startup is raising a $35 million seed round led by Sofinnova Partners. Bpifrance’s Large Venture fund, Frst, Cathay Innovation, Headline, Hummingbird, NJF Capital, Owkin, Top Harvest Capital and Xavier Niel also participated in this funding round.

Bioptimus isn’t coming out of nowhere. At the helm of the company, Jean-Philippe Vert will act as co-founder and executive chairman in a non-operational role. At his day job, he is the Chief R&D Officer at Owkin, the French biotech unicorn that tries to discover new drugs and improve diagnostics through AI.

Rodolphe Jenatton, the CTO of Bioptimus, has more experience in artificial intelligence as he was a senior research scientist at Google. Several co-founders are also former researchers at Google DeepMind.

As part of Owkin’s work for top biopharmas, Owkin has amassed multimodal patient data through partnerships with leading academic hospitals around the world. Bioptimus will leverage this unique data set to train its foundational model.

A moonshot project from Owkin

Bioptimus could even be considered a sort of spin-off from Owkin — or a so-called moonshot project. But why didn’t Owkin build foundational models in house? Creating new AI models is such a daunting task that creating a separate entity made more sense.

“Building biology [foundational models] is not a part of Owkin’s roadmap, but Owkin supports and is keen to partner with a company like Bioptimus. Training very large scale [foundational models] requires important resources in terms of data volume, computing power, and breadth of data modalities that are easier to unlock as a specific entity,” Jean-Philippe Vert told TechCrunch. “As a ‘pure player’ in foundational models, Bioptimus is better set up to do this.”

The startup has also signed a partnership with Amazon Web Services. It sounds like the company’s model will be trained in Amazon’s data centers. Now that Bioptimus is well funded, it’s time to work on the AI model and see what the biotech research community can do with it.

“Eventually, the AI we build will improve disease diagnosis, precision medicine, and will help create new biomolecules for medical or environmental use,” Vert said.



Big Tech AI infrastructure tie-ups set for deeper scrutiny, says EU antitrust chief




The impact of AI must be front of mind for enforcers of merger control policy, the European Union’s antitrust chief and digital EVP, Margrethe Vestager, said yesterday, warning that “wide-reaching” digital markets can lead to unexpected economic effects. Speaking during a seminar discussing how to prevent tech giants like Microsoft, Google and Meta from monopolizing AI, she gave a verbal shot across the bows of Big Tech to expect more — and deeper — scrutiny of their operations.

“We have to look carefully at vertical integration and at ecosystems. We have to take account of the impact of AI in how we assess mergers. We even have to think about how AI might lead to new kinds of algorithmic collusion,” she said.

Her remarks suggest the bloc will be a lot more active in its assessments of tech M&A going forward — and, indeed, cosy AI partnerships.

Last month the EU said it would look into whether Microsoft’s investment in generative AI giant OpenAI is reviewable under the bloc’s merger regulations.

Vestager’s address was also notable for clearly expressing that competition challenges are inherent to how cutting edge AI is developed, with the Commission EVP flagging “barriers to entry everywhere”.

“Large Language Models [LLMs] depend on huge amounts of data, they depend on cloud space, and they depend on chips. There are barriers to entry everywhere. Add to this the fact that the tech giants have the resources to acquire the best and brightest talent,” she said. “We’re not going to see disruption driven by a handful of college drop-outs who somehow manage to outperform Microsoft’s partner OpenAI or Google’s DeepMind. The disruption from AI will come from within the nest of existing tech ecosystems.”

The blistering rise of generative AI over the past year+ has shone a spotlight on how developments are dominated by a handful of firms with either close ties to familiar Big Tech platforms or who are tech giants themselves. Examples include ChatGPT maker OpenAI’s close partnership with hyperscaler Microsoft; Google and Amazon ploughing investment into OpenAI rival Anthropic; and Facebook’s parent Meta mining its social media data mountain to develop its own series of foundational models (aka LLaMA).

How European AI startups can hope to compete without equivalent access to key AI infrastructure was a running thread in the seminar discussions.

Challenges and uncertainties

“We’ve seen LLaMa 2 being open sourced. Will LLaMa 3 also be open sourced?” wondered Tobias Haar, general counsel of the German foundational model AI startup Aleph Alpha, speaking during a panel discussion that followed Vestager’s address. “Will there be companies that rely on open source Large Language Models that suddenly, at least not in the next iterative stage, are no longer available as an open source?”

Haar emphasized that uncertainty over access to key AI inputs is why the startup took the decision to invest in building and training its own foundational models in its own data center — “in order to keep and maintain this independence”. At the same time, he flagged the challenge inherent for a European startup in trying to compete with US hyperscalers and the dedicated compute resource they can roll out for training AIs with their chosen partners.

Aleph Alpha’s own data center runs 512 A100 Nvidia GPUs — the “largest commercial AI cluster” in Europe, per Haar. But he emphasized this pales in comparison to Big Tech’s infrastructure for training — pointing to Microsoft’s announcement that it would be installing circa 10,000 GPUs in the UK last year alone, as part of a £25BN investment over three years (which will actually fund more than 20,000 GPUs by 2026).

“In order to put it into perspective — and perspective is also what is relevant in the competition, legal assessment of what is going on in the market field — we run 512 A100 GPUs by Nvidia,” he said. “This is a lot because it makes us somewhat independent but it’s still nothing compared to the sheer computing power there is for other organisations to train and to fine tune their LLMs on. And I know that OpenAI has been training the LLMs — but I understand that Microsoft is fine tuning them also to their needs. So this is already [not a level playing field].”

In her address, Vestager did not offer any concrete plan for how the bloc might move to level the playing field for homegrown generative AI startups — nor even entirely commit to the need for the bloc to intervene. (But tackling digital market concentration, which was built up, partially, under her watch, remains a tricky subject for the EU — which has increasingly been accused of regulating everything but changing nothing when it comes to Big Tech’s market power.)

Nonetheless, her address suggests the EU is preparing to get a lot tougher and more comprehensive in scrutinizing tech deals, as a consequence of recent developments in AI.

Only a handful of years ago Vestager cleared Google’s controversial acquisition of fitness wearable maker Fitbit, accepting commitments from the tech giant that it wouldn’t use Fitbit’s data for ads for a period of ten years — but leaving the tech giant free to mine users’ data for other purposes, including AI. (To wit: Last year Google added a generative AI chatbot to the Fitbit app.)

But the days of Big Tech getting to cherry-pick acquisition targets, and grab juicy-looking AI training data, may be winding down in Europe.

Vestager also implied the bloc will seek to make full use of existing competition tools, including the Digital Markets Act (DMA) — an ex ante competition reform which comes into application on six tech giants (including Microsoft, Google and Meta) early next month — as part of its playbook to shape how the AI market develops, suggesting the EU’s competition policy must work hand-in-glove with digital regulations to keep pace with risks and harms.

There have been doubts over how — or even whether — the DMA applies to generative AI, given no cloud services have so far been designated under the regulation as so-called “core platform services”. So there are worries the bloc has, once again, missed the boat when it comes to putting meaningful market controls on the next wave of disruptive tech.

In her address, Vestager rejected the idea that it’s already too late for the EU to prevent Big Tech from sewing up AI markets — tentatively suggesting “we can make an impact” — but she also warned that the “window of opportunity” for enforcers and lawmakers to shape outcomes that are “truly beneficial to our economy, to our citizens and to our democracies”, as she put it, will be open only briefly.

Still, her speech raised a lot more questions over how enforcers and policymakers should respond to the layered challenges thrown up by AI — including democratic integrity, intellectual property and the ethical application of such systems, to name a few — than she had actual solutions. She also sounded a bit hesitant when it came to how to weigh competition considerations with the broader sweep of societal harms AI use may entail. So her message — and resolve — seemed a bit conflicted.

“There are still big questions around how intellectual property rights are respected. About how ethical AI is deployed. About areas where AI should never be deployed. In each of these decisions, there is a competition policy dimension that needs to be considered. Conversely, how AI regulation is enforced will affect the openness and accessibility of the markets it impacts,” she said, implying there may be trade-offs between regulating AI risks and creating a vibrant AI ecosystem.

“There are questions around input neutrality and the influence such systems could have on our democracies. A Large Language Model is only as good as the inputs it receives, and for this there must always be a discretionary element. Do we really want our opinion-making to be reliant on AI systems that are under the control not of the European people — but of tech oligarchs and their shareholders?” she also wondered, suggesting the bloc may need to think about drafting even more laws to regulate AI risks.

Clearly, coming up with more laws now is not a recipe for instant action on AI — yet her speech literally called for “acting swiftly” (and “thinking ahead” and “cooperating”) to maximize the benefits of AI while minimizing the risks.

Overall, despite the promise of more intelligent merger scrutiny, the tone she struck veered toward ‘managing expectations’. And her call to action appealed to a broader collective of international enforcers, regulators and policymakers to join forces to fix this one — rather than the EU sticking its head above the parapet.

While Vestager avoided instant answers for derailing Big Tech’s well-funded dash to monopolize AI, other panellists offered a few.


The fieriest ideas came from Barry Lynn of the Washington-based Open Markets Institute, a non-profit whose stated mission starts with stopping monopolies. “Let’s break off cloud,” he suggested. “Let’s turn cloud into a utility. It’s pretty easy to do. This is actually one of the easiest solutions we can embrace right now — and it would take away a huge amount of their leverage.”

He also called for a blanket non-discrimination regime (i.e. “common carrier” type rules for platforms to prohibit price discrimination and information manipulation); and for a requisitioning of aggregated “public data” tech giants have amassed by tracking web users. “Why does Google own the data? That’s our data,” he argued. “It’s public data… It doesn’t belong to Google — doesn’t belong to any of these folks. It’s our data. Let’s exert ownership over it.”

Microsoft’s director of competition, Carel Maske, who had — awkwardly enough — been seated right next to Lynn on the panel, all but broke into a sweat when the moderator offered him the chance to respond to that. “I think there’s a lot to discuss,” he hedged, before doing his best to brush aside Lynn’s case for immediate structural separation of hyperscalers.

“I’m not sure you are addressing, really, the needs of the investments that are needed in cloud and infrastructure,” he got out, dangling a skeletal argument against being broken up (i.e. that structural separation of Big Tech from core AI infrastructure would undermine the investment needed to drive innovation forward), before hurrying to route the chat back to more comfortable topics (like “how to make competition tools work” or “what the appropriate regulatory framework is”), which Microsoft evidently feels won’t prevent Big Tech business as usual.

Talking of whether existing competition tools are able to do the job of bringing tech giants’ scramble for AI to heel, another panellist, Andreas Mundt — president of the German competition authority, the Federal Cartel Office (FCO) — had a negative perspective to recount, drawn from recent experience.

Existing merger processes have already failed, domestically, to tackle Microsoft’s cosy relationship with OpenAI, he pointed out. The FCO took an early look at whether the partnership should be subject to merger control — before deciding, last November, the arrangement did not “currently” meet the bar.

During the panel, Mundt said he would have liked a very different outcome. He argued tech giants have — very evidently — changed tack from the earlier “killer acquisition” strategy they deployed to slay emergent competition — to a softer partnership model that allows these close engagements to fly under enforcers’ radar.

“All we see are very soft cooperations,” he noted. “This is why we looked at this Microsoft OpenAI issue — and what did we find? Well, we were not very happy about it but from a formal point of view, we could not say this was a merger.

“What we found — and this should not be underestimated — in 2019 when Microsoft invested more than €1 billion into OpenAI we saw the creation of a substantial competitive influence of Microsoft into OpenAI. And that was long before Sam Altman was fired and rehired again. So there is this influence, as we see it, and this is why merger control is so important.

“But we could not prohibit that as a merger, by the way, because by that time, OpenAI had no impact in Germany — they weren’t active on German markets — this is why it was not a merger from our perspective. But what remains, it is very, very important, there is this substantial, competitive influence — and we must look at that.”

Asked what he would have liked to be able to do about Microsoft OpenAI, the FCO’s Mundt said he wanted to look at the core question: “Was it a merger? And was it a merger that maybe needs to go to phase two — that we should assess and maybe block?”

Striking a more positive note, the FCO president professed himself “very happy” the European Commission took the subsequent decision — last month — to open its own proceeding to check whether Microsoft and OpenAI’s partnership falls under the bloc’s merger rules. He also highlighted the UK competition authority’s move here, in December, when it said it would look at whether the tie-up amounts to a “relevant merger” situation.

Those proceedings are ongoing.

“I can promise you, we will look at all these cooperations very carefully — and if we see, if it only gets close to a merger, we will try to get it in [to merger rules],” Mundt added, factoring fellow enforcers’ actions into his calculation of what success looks like here.

A whole army of competition and digital rule enforcers working together — even in parallel — to attack the knotty problems thrown up by Big Tech + AI was also named by Vestager as a critical piece for cracking this puzzle. (And on this front, she encouraged responses to an open consultation on generative AI and virtual worlds that the competition unit is running until March 11.)

“For me, the very first lesson from our experience so far is that our impact will always be greatest when we work together, communicate clearly, and act early on,” she emphasized, adding: “I will continue to engage with my counterparts in the United States and elsewhere, to align our approach as much as possible.”



Copyright © 2023 Dailycrunch. & Managed by Shade Marketing & PR Agency