
Tech

Unitary AI picks up $15M for its multimodal approach to video content moderation


Content moderation continues to be a contentious topic in the world of online media. New regulations and public concern are likely to keep it as a priority for many years to come. But weaponised AI and other tech advances are making it ever harder to address. A startup out of Cambridge, England, called Unitary AI believes it has landed on a better way to tackle the moderation challenge — by using a “multimodal” approach to help parse content in the most complex medium of all: video.

Today, Unitary is announcing $15 million in funding to capitalise on momentum it’s been seeing in the market. The Series A — led by top European VC Creandum, with participation also from Paladin Capital Group and Plural — comes as Unitary’s business is growing. The number of videos it is classifying has jumped this year to 6 million/day from 2 million (covering billions of images) and the platform is now adding on more languages beyond English. It declined to disclose names of customers but says ARR is now in the millions.

Unitary is using the funding to expand into more regions and to hire more talent. It is not disclosing its valuation; it previously raised under $2 million, followed by a further $10 million in seed funding. Other investors include the likes of Carolyn Everson, the former Meta executive.

There have been dozens of startups over recent years harnessing different aspects of artificial intelligence to build content moderation tools.

And when you think about it, the sheer scale of the challenge in video makes it an apt application. No army of human moderators could ever parse the hundreds of zettabytes of data being created and shared on platforms like YouTube, Facebook, Reddit or TikTok, to say nothing of dating sites, gaming platforms, videoconferencing tools and other places where videos appear. Altogether, video makes up more than 80% of all online traffic.

That angle is also what interested investors. “In an online world, there’s an immense need for a technology-driven approach to identify harmful content,” said Christopher Steed, chief investment officer, Paladin Capital Group, in a statement.

Still, it’s a crowded space. OpenAI, Microsoft (using its own AI, not OpenAI’s), Hive, ActiveFence/Spectrum Labs, Oterlu (now part of Reddit), Sentropy (now part of Discord) and Amazon’s Rekognition are just a few of the many tools in use.

From Unitary AI’s point of view, existing tools are not as effective as they should be when it comes to video. That’s because tools built up to now have typically focused on parsing one type of data at a time, say text or audio or image, but not all of them in combination, simultaneously. That leads to a lot of false flags (or, conversely, missed ones).

“What is innovative about Unitary is that we have genuine multimodal models,” said CEO Sasha Haco, who cofounded the company with CTO James Thewlis. “Rather than analyzing just a series of frames, in order to understand the nuance and whether a video is [for example] artistic or violent, you need to be able to simulate the way a human moderator watches the video. We do that by analysing text, sound and visuals.”

Customers put in their own parameters for what they want to moderate (or not), and Haco said they typically will use Unitary in tandem with a human team, which in turn will now have to do less work and face less stress.
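Unitary has not published its architecture, but the multimodal idea Haco describes, fusing text, audio and visual signals into a single classifier rather than flagging on any one stream in isolation, can be sketched roughly as follows. Everything here is hypothetical: the encoders are stand-ins for real deep models, and the linear classifier is a toy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality encoders. In a real system these would be
# deep networks (a vision model, an audio model, a text model); here
# each one just mean-pools its input into a fixed-size feature vector.
def encode_visual(frames):          # frames: (num_frames, feature_dim)
    return frames.mean(axis=0)

def encode_audio(spectrogram):      # spectrogram: (time_steps, feature_dim)
    return spectrogram.mean(axis=0)

def encode_text(tokens):            # tokens: transcript/on-screen text embeddings
    return tokens.mean(axis=0)

def moderate(frames, spectrogram, tokens, w, b):
    """Score a video by fusing all three modalities in ONE classifier,
    instead of scoring each stream separately and OR-ing the flags."""
    fused = np.concatenate([
        encode_visual(frames),
        encode_audio(spectrogram),
        encode_text(tokens),
    ])
    logit = fused @ w + b
    return 1.0 / (1.0 + np.exp(-logit))  # probability the video is harmful

# Toy inputs: 30 frames, 100 audio steps, 12 transcript tokens, 8-dim features.
frames = rng.normal(size=(30, 8))
spec = rng.normal(size=(100, 8))
tokens = rng.normal(size=(12, 8))
w = rng.normal(size=24)              # fused vector is 3 * 8 = 24 dims
score = moderate(frames, spec, tokens, w, b=0.0)
print(f"harm score: {score:.3f}")
```

The point of the joint classifier is that it can weigh cross-modal context (violent imagery plus a news-anchor voiceover, say) before deciding, which single-stream pipelines by construction cannot.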

“Multimodal” moderation seems so obvious; why hasn’t it been done before?

Haco said one reason is that “you can get quite far with the older, visual-only model.” Still, that leaves a gap in the market for a more capable approach.

The reality is that the challenges of content moderation continue to dog social platforms, games companies and other digital channels where users share media. Lately, social media companies have signalled a move away from stronger moderation policies; fact-checking organizations are losing momentum; and questions remain about the ethics of moderating harmful content. The appetite for the fight has waned.

But Haco has an interesting track record when it comes to working on hard, inscrutable subjects. Before Unitary AI, Haco — who holds a PhD in quantum physics — worked on black hole research with Stephen Hawking. She was there when that team captured the first image of a black hole, using the Event Horizon Telescope, but she had an urge to shift her focus to work on earthbound problems, which can be just as hard to understand as a spacetime gravity monster.

Her “epiphany,” she said, was that content moderation was full of products and noise, but nothing yet matched what customers actually wanted.

Thewlis’s expertise, meanwhile, is being put directly to work at Unitary: he also holds a PhD, his in computer vision from Oxford, where his speciality was “methods for visual understanding with less manual annotation.”

(‘Unitary’ is a double reference, I think. The startup is unifying a number of different signals to better understand videos. But it may also nod to Haco’s previous career: unitary operators describe how quantum states evolve, systems that are themselves complicated and unpredictable, just like online content and the humans who create it.)

Multimodal research in AI has been ongoing for years, but we seem to be entering an era where we will see many more applications of the concept. Case in point: Meta just last week referenced multimodal AI several times in its Connect keynote previewing its new AI assistant tools. Unitary thus sits at the interesting intersection of cutting-edge research and real-world application.

“We first met Sasha and James two years ago and have been incredibly impressed,” said Gemma Bloemen, a principal at Creandum and board member, in a statement. “Unitary has emerged as clear early leaders in the important AI field of content safety, and we’re so excited to back this exceptional team as they continue to accelerate and innovate in content classification technology.”

“From the start, Unitary had some of the most powerful AI for classifying harmful content. Already this year, the company has accelerated to 7 figures of ARR, almost unheard of at this early stage in the journey,” said Ian Hogarth, a partner at Plural and also a board member.





NASA reveals footage of astronauts training in desert for moon mission



It’s taken more than half a century, but NASA really is going back to the moon.

Some of the space agency’s astronauts have been training in the Northern Arizona desert for the looming Artemis 3 mission, which is currently slated to land in September 2026. Decades of other U.S. space priorities (such as the Space Shuttle and building the International Space Station), along with the astronomical costs of sending astronauts to our natural satellite, have impeded such a return endeavor.

But after the successful launch of NASA’s new megarocket in 2022 — the Space Launch System — the moon mission’s wheels are turning, albeit slowly. That’s because every component of the agency’s new lunar campaign, dubbed Artemis, must be profoundly safe. Lives will be aboard.

NASA has released images of the astronauts’ May 2024 training in the desert, including a recent view of NASA astronauts Kate Rubins and Andre Douglas simulating a nighttime space walk (the official Artemis 3 astronaut crew has yet to be announced). Training in the dark or twilight is essential, as the conditions mimic the dark, shadowy regions Artemis astronauts will explore: NASA is going to the moon’s south pole region, a place where the sun barely rises over the lunar hills. It’s a world of profoundly long shadows and dim environs.


The endeavor you see below is called the Joint Extravehicular Activity and Human Surface Mobility Test Team Field Test 5, or JETT5.

NASA astronauts Kate Rubins and Andre Douglas simulating a moonwalk for the looming Artemis 3 mission.
Credit: NASA / Josh Valcarcel

On left: Astronaut Andre Douglas reviews sample collection procedures. On right: Astronaut Kate Rubins ensures she has the necessary tools.
Credit: NASA / Josh Valcarcel

Astronaut Kate Rubins used a hammer to drive in a tube that will collect soil samples from the ground. On the moon, such samples will be sealed and then returned to Earth.
Credit: NASA / Josh Valcarcel

The two astronauts pushing a tool cart across the desert surface.
Credit: NASA / Josh Valcarcel

NASA captured these images in a rugged region called the San Francisco Volcanic Field. The area astronauts are headed to is also quite rugged. It’s a heavily cratered region, teeming with volcanic rocks. Crucially, they’ll be hunting for ice deposits, too.

“The ice deposits could also serve as an important resource for exploration because they are comprised of hydrogen and oxygen that can be used for rocket fuel or life support systems,” NASA explained.

The moon may one day serve as a lunar fuel depot, where after burning copious amounts of fuel during launch, spacecraft stop to fill up for deeper space missions. They may be headed to Mars, resource-rich asteroids, or beyond.







Webb Telescope Discovers Galaxies Formed Right After Birth Of The Universe With Earliest Elements




A group of astronomers sifting through James Webb Space Telescope data have found three galaxies from the earliest universe. According to their findings, published in the journal Science, the universe was just 400 to 600 million years old when these galaxies formed. By current estimates, the universe is about 13.8 billion years old.

Kasper Heintz, the lead author and an assistant professor of astrophysics at the University of Copenhagen, called these galaxies “sparkling islands in a sea of otherwise neutral, opaque gas.”


Scientists believe the universe was very different during the Era of Reionisation, the period several hundred million years after the big bang. At that point, the gas between stars and galaxies was largely opaque; it became transparent only about one billion years later.

The galaxies discovered in the Webb telescope’s data are believed to be surrounded by gas made almost purely of hydrogen and helium, the earliest elements to form in the universe.

Darach Watson, a co-author of the paper, said that the large gas reservoirs suggest that “the galaxies have not had enough time to form most of their stars yet.”

Moving forward, the researchers will work to build large statistical samples of these galaxies and measure the prevalence and prominence of their features.


(Image: NASA)







Neuralink’s Rival Company Precision Creates World Record By Placing Over 4,000 Electrodes In Human Brain



Precision Neuroscience, a rival to Elon Musk’s Neuralink, has set a world record by placing 4,096 electrodes in a human brain. That is double the 2,048 electrodes placed last year.

According to the official statement, the record-setting operation took place in April at the Mount Sinai Health System in New York, as part of an ongoing clinical trial for the brain chip.

Precision’s chip in the brain. Image: Precision Neuroscience

Precision’s implant uses a thin-film microelectrode array containing 1,024 miniature electrodes covering 1.6 square cm of area. Four such arrays were placed on the patient’s brain.

A higher electrode count allows more data to be transmitted to and from the brain, and this will determine the capability of the chip.
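The arithmetic behind the record is straightforward, and restating the article’s own numbers makes the scale clear (the per-square-centimetre figure below is simply derived from them, not a claim by Precision):

```python
# Figures reported in the article.
electrodes_per_array = 1024   # electrodes in one thin-film microelectrode array
arrays_implanted = 4          # arrays placed on the patient's brain
area_per_array_cm2 = 1.6      # coverage of a single array, in square cm
previous_record = 2048        # electrodes placed last year

total_electrodes = electrodes_per_array * arrays_implanted
density = electrodes_per_array / area_per_array_cm2

print(total_electrodes)                        # 4096, the record count
print(total_electrodes // previous_record)     # 2: double last year's figure
print(f"{density:.0f} electrodes per sq cm")   # 640 electrodes per sq cm
```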


“This record is a significant step towards a new era. The ability to capture cortical information of this magnitude and scale could allow us to understand the brain in a much deeper way,” said Benjamin Rapoport, Precision’s co-founder and Chief Science Officer.

Rapoport, himself a co-founder of Neuralink, exited that company and established Precision with two other Neuralink alumni in 2021.

According to Ars Technica, he told The Wall Street Journal that he left Neuralink over safety concerns about its brain implants, which he says are too invasive.

ALSO SEE: Elon Musk’s Neuralink Gets Approval For Second Chip Implant In Human Brain

The company claims that its ‘Layer 7 Cortical Interface’ can conform to the brain’s cortex with minimal invasiveness and without damaging any tissue.

Neuralink is currently at the forefront of the brain-computer interface field. It implanted its chip in a first patient earlier this year and is preparing for a second operation.

As for Precision, it is testing its chip through research collaborations with West Virginia University’s Rockefeller Neuroscience Institute, Perelman School of Medicine (Penn Medicine), and New York’s Mount Sinai Health System.

(Image: Precision Neuroscience)






Copyright © 2023 Dailycrunch. & Managed by Shade Marketing & PR Agency