
Rabbit is building an AI model that understands how software works

What if you could interact with any piece of software using natural language? Imagine typing in a prompt and having AI translate the instructions into machine-comprehensible commands, executing tasks on a PC or phone to accomplish the goal you just described.
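Conceptually, such a layer has two jobs: turn the prompt into a plan of concrete interface steps, then dispatch those steps to the device. Here is a minimal, hypothetical Python sketch of that pattern; the names, the stubbed planner and the canned action plan are all invented for illustration and are not drawn from Rabbit's actual system:

```python
from dataclasses import dataclass


@dataclass
class UIAction:
    kind: str        # e.g. "click", "type", "scroll"
    target: str      # the UI element to act on, located by its label
    value: str = ""  # text to enter, if any


def plan_actions(prompt: str) -> list[UIAction]:
    """Translate a natural-language goal into concrete UI steps.

    In a real system this planning would be done by an AI model;
    here it is a hard-coded stub for one canned request.
    """
    if "reservation" in prompt.lower():
        return [
            UIAction("type", "search box", "dinner for two, 7pm"),
            UIAction("click", "first result"),
            UIAction("click", "book button"),
        ]
    return []


def execute(actions: list[UIAction]) -> None:
    """Dispatch each planned step to the device's UI automation layer.

    Stubbed out with prints; a real layer would drive the OS here.
    """
    for a in actions:
        print(f"{a.kind}: {a.target}" + (f" <- {a.value!r}" if a.value else ""))


execute(plan_actions("Make me a dinner reservation"))
```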

That’s the idea behind Rabbit, a rebranding of Cyber Manufacture Co., which is building a custom, AI-powered UI layer designed to sit between a user and any operating system.

Founded by Jesse Lyu, who holds a bachelor’s degree in mathematics from the University of Liverpool, and Alexander Liao, previously a researcher at Carnegie Mellon, Rabbit is creating a platform, Rabbit OS, underpinned by an AI model that can — so Lyu and Liao claim — see and act on desktop and mobile interfaces the same way that humans can.

“The advancements in generative AI have ignited a wide range of initiatives within the technology industry to define and establish the next level of human-machine interaction,” Lyu told TechCrunch in an email interview. “Our perspective is that the ultimate determinant of success lies in delivering an exceptional end-user experience. Drawing upon our past endeavors and experiences, we’ve realized that revolutionizing the user experience necessitates a bespoke and dedicated platform and device. This fundamental principle underpins the current product and technical stack chosen by Rabbit.”

Rabbit — which has $20 million in funding from Khosla Ventures, Synergis Capital and Kakao Investment, at a valuation a source familiar with the matter puts between $100 million and $150 million — isn’t the first to attempt layering a natural language interface on top of existing software.

Google’s AI research lab, DeepMind, has explored several approaches for teaching AI to control computers, for example by having an AI observe keyboard and mouse commands from people completing “instruction-following” tasks such as booking a flight. Researchers at Shanghai Jiao Tong University recently open sourced a web-navigating AI agent that they claim can figure out how to do things like use a search engine and order items online. Elsewhere, there are apps like the viral Auto-GPT, which tap AI startup OpenAI’s text-generating models to act “autonomously,” interacting with apps, software and services both online and local, like web browsers and word processors.

But if Rabbit has a direct rival, it’s probably Adept, a startup training a model, called ACT-1, that can understand and execute commands such as “generate a monthly compliance report” or “draw stairs between these two points in this blueprint” using existing software like Airtable, Photoshop, Tableau and Twilio. Co-founded by former DeepMind, OpenAI and Google engineers and researchers, Adept has raised hundreds of millions of dollars from strategic investors including Microsoft, Nvidia, Atlassian and Workday at a valuation of around $1 billion.

So how does Rabbit hope to compete in the increasingly crowded field? By taking a different technical tack, Lyu says.

While it might sound like what Rabbit’s creating is akin to robotic process automation (RPA), or software robots that leverage a combination of automation, computer vision and machine learning to handle repetitive tasks like filling out forms and responding to emails, Lyu insists that it’s more sophisticated. Rabbit’s core interaction model can “comprehend complex user intentions” while “operating user interfaces,” he says, to ultimately (and maybe a little hyperbolically) “understand human intentions on computers.”

“The model can already interact with high-frequency, major consumer applications — including Uber, Doordash, Expedia, Spotify, Yelp, OpenTable and Amazon — across Android and the web,” Lyu said. “We seek to extend this support to all platforms (e.g. Windows, Linux, MacOS, etc.) and niche consumer apps next year.”

Rabbit’s model can do things like book a flight or make a reservation. And it can edit images in Photoshop, using the appropriate built-in tools.

Or rather, it will be able to someday. I tried a demo on Rabbit’s website and the model’s a bit limited in functionality at the moment — and it seems to get confused by this fact. I prompted the model to edit a photo and it instructed me to specify which one — an impossibility given that the demo UI lacks an upload button or even a field to paste in an image URL.

The Rabbit model can indeed, though, answer questions that require canvassing the web, à la ChatGPT with web access. I asked it for the cheapest flights available from New York to San Francisco on October 5, and — after about 20 seconds — it gave me an answer that appeared to be factually accurate, or at least plausible. And the model correctly listed at least a few TechCrunch podcasts (e.g. “Chain Reaction”) when asked to do so, beating an early version of Bing Chat in that regard.

Rabbit’s model was less inclined to respond to more problematic prompts such as instructions for making a dirty bomb and one questioning the validity of the Holocaust. Clearly, the team’s learned from some of the mistakes of large language models past (see: the early Bing Chat’s tendency to go off the rails) — at least judging by my very brief testing.

The demo model on Rabbit’s site, which is a bit limited in functionality. (Image: Rabbit)

“By leveraging [our model], the Rabbit platform empowers any user, regardless of their professional skills, to teach the system how to achieve specific goals on applications,” Lyu explains. “[The model] continuously learns and imitates from aggregated demonstrations and available data on the internet, creating a ‘conceptual blueprint’ for the underlying services of any application.”

Rabbit’s model is robust, to a degree, to “perturbations” like interfaces that aren’t presented in a consistent way or that change over time, Lyu added. It simply has to “observe,” via a screen-recording app, a person using a software interface at least once.
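To make the “robust to perturbations” idea concrete: rather than replaying raw pixel coordinates from a recording, a learned routine can re-locate a control by its label, so a button that moves or gets reworded can still be resolved. A toy sketch, again purely illustrative rather than Rabbit's actual method:

```python
from difflib import SequenceMatcher


def find_control(recorded_label: str, current_labels: list[str]) -> str:
    """Pick the on-screen control whose label best matches the one
    captured during the original demonstration."""
    return max(
        current_labels,
        key=lambda label: SequenceMatcher(
            None, recorded_label.lower(), label.lower()
        ).ratio(),
    )


# The demo was recorded against a "Book table" button; today's UI
# labels it slightly differently, but the match still resolves.
print(find_control("Book table", ["Sign in", "Book a table", "Menu"]))
# -> "Book a table"
```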

Now, it’s not clear just how robust the Rabbit model is. In fact, the Rabbit team doesn’t know itself — at least not precisely. And that’s not terribly surprising, considering the countless edge cases that can crop up in navigating a desktop, smartphone or web UI. That’s why, in addition to building the model, the company’s architecting a framework to test, observe and refine the model as well as infrastructure to validate and run future versions of the model in the cloud.

Rabbit also plans to release dedicated hardware to host its platform. I question the wisdom of that strategy, given how difficult scaling hardware manufacturing tends to be, the consumer hostility of vendor lock-in and the fact that the device might have to eventually compete against whatever OpenAI’s planning. But Lyu — who curiously wouldn’t tell me exactly what the hardware will do or why it’s necessary — admits that the roadmap’s a bit in flux at the moment.

“We are building a new, very affordable, and dedicated form factor for a mobile device to run our platform for natural language interactions,” Lyu said. “It’ll be the first device to access our platform … We believe that a unique form factor allows us to design new interaction patterns that are more intuitive and delightful, offering us the freedom to run our software and models that the existing platforms are unable to or don’t allow.”

Hardware isn’t Rabbit’s only scaling challenge, should it decide to pursue its proposed hardware strategy. A model like the one Rabbit’s building presumably needs a lot of examples of successfully completed tasks in apps. And collecting that sort of data can be a laborious — not to mention costly — process.

For example, in one of the DeepMind studies, the researchers wrote that, in order to collect training data for their system, they had to pay 77 people to complete over 2.4 million demonstrations of computer tasks, which works out to more than 31,000 demonstrations per person. Extrapolate that out, and the sheer magnitude of the problem comes into sharp relief.

Now, $20 million can go a long way — especially since Rabbit’s a small team (9 people) currently working out of Lyu’s house. (He estimates the burn rate at around $250,000.) I wonder, though, whether Rabbit will be able to keep up with the more established players in the space — and how it’ll combat new challengers like Microsoft’s Copilot for Windows and OpenAI’s efforts to foster a plugin ecosystem for ChatGPT.

Rabbit is nothing if not ambitious, though — and confident it can make business-sustaining money through licensing its platform, continuing to refine its model and selling custom devices. Time will tell.

“We haven’t released a product yet, but our early demos have attracted tens of thousands of users,” Lyu said. “The eventual mature form of models that the Rabbit team will be developing will work with data that they have yet to collect and will be evaluated on benchmarks that they have yet to design. This is why the Rabbit team is not building the model alone, but the full stack of necessary apparatus in the operating system to support it … The Rabbit team believes that the best way to realize the value of cutting-edge research is by focusing on the end users and deploying hardened and safeguarded systems into production quickly.”




Geminid Meteor Shower 2024: All About This Year’s Best Cosmic Show And When To Spot It


Gear up for the last and best meteor shower of the year. The Geminid meteor shower will peak on December 13-14, bringing shooting stars to the night sky. Under the right conditions, skygazers could witness over 120 meteors per hour, as the Geminids are considered the brightest shower of the year.

The Geminid meteors originate from the asteroid 3200 Phaethon. According to NASA, this asteroid takes about 1.4 years to orbit the Sun once. Scientists think Phaethon is a “dead comet” or an example of a new class of object astronomers call a “rock comet.”

This hypothesis is fuelled by the asteroid’s highly elliptical, very comet-like orbit around the Sun.

When Earth, orbiting the Sun, passes through the trail of debris Phaethon leaves behind, the particles interact with the atmosphere and we get a meteor shower. Notably, the Geminid meteor shower usually peaks in mid-December but remains active through December 21-22.

ALSO SEE: The Planets Are Stunning In December 2024 — And No Telescope Is Needed

Will you be able to watch?

The conditions this time are not exactly favourable, as the Moon will be nearly full. According to NASA, the shower will peak under the nearly full Moon, whose illumination might wash out the fainter meteors and reduce visibility. The Moon will be full on December 15.

But you can still spot the meteors, in relatively lower numbers, before dawn. They will appear like specks of dust streaking through the morning sky.

Try finding the constellation Gemini, as this is where the meteors appear to originate. This radiant point is also what inspired the name Geminids.

ALSO SEE: Auroras And Perseid Meteor Shower Dazzle Night Sky Across The World; See Pictures

(Image: NASA)






NASA’s Jupiter-Bound Lucy Spacecraft To Complete 2nd Earth Flyby On Dec 13; Watch Live Here


A space probe is heading toward Earth for a gravity assist. NASA’s Lucy spacecraft will make its closest approach to our planet at 9:45 am IST on December 13, from a distance of just 360 kilometres. It will pass over the United States, which will be in darkness at the time.

“This close flyby will result in a ‘gravity assist,’ putting the spacecraft on a new trajectory that travels through the main asteroid belt and out to the never-before-explored Jupiter Trojan asteroids, small bodies that orbit the Sun at the same distance as Jupiter,” NASA said in a statement.

This will be the second gravity assist of Lucy’s 12-year voyage to Jupiter’s Trojan asteroids. The first flyby, which put the spacecraft on a two-year orbit, occurred on October 16, 2022. After that flyby, Lucy flew past the asteroid Dinkinesh and its satellite Selam.

NASA says the next flyby will put Lucy on a six-year orbit, sending it through the main asteroid belt, where it will fly past the asteroid Donaldjohanson. Lucy will then fly into the Trojan asteroid swarm that leads Jupiter in its orbit. Its first Trojan asteroid encounter is set for 2027. The purpose of the Lucy mission is to understand the formation of the planets and, ultimately, the solar system.

ALSO SEE: NASA’s Lucy Mission Found A Satellite Orbiting Asteroid Eurybates

Where to watch Lucy’s flyby?

You can watch Lucy’s flyby via a livestream hosted by the Virtual Telescope Project, which has scheduled a webcast on YouTube for 10 am IST.

NASA says that Lucy will approach Earth from the direction of the Sun, meaning it will be lost in the solar glare. However, observers in the Hawaiian islands may be able to catch a glimpse before the spacecraft passes into Earth’s shadow. It may also be visible to telescopes in west Africa and eastern South America when it emerges from Earth’s shadow about 20 minutes after the closest approach.

Lucy will be travelling at more than 53,000 km per hour at the time of the flyby, NASA said.

ALSO SEE: NASA’s Lucy Spacecraft Captures Terrifying Views Of Earth And Moon

(Image: NASA)






ISRO’s Aditya-L1 Unlocks New Secrets Of The Sun During Solar Eruption

Scientists in India recently observed the Sun’s outermost atmospheric layer, the corona, using the Aditya-L1 observatory and made significant discoveries. ISRO says the observation was carried out after a coronal mass ejection (CME) event that occurred on July 16.

The Sun’s corona remains mysterious, and scientists don’t know much about what makes it hotter than the solar surface. The corona is about a million degrees Celsius, whereas the surface is only about 5,600 degrees Celsius. The corona is also believed to hold secrets about the Sun’s activity and how it affects space weather.

The Aditya-L1 observatory. Image: ISRO

From the corona emerge coronal mass ejections (CMEs), expulsions of plasma made up of highly charged particles. When these particles interact with Earth’s atmosphere and magnetosphere, they create auroras but can also disrupt satellite communications, GPS systems and power grids.

ALSO SEE: ISRO’s Aditya-L1 Observatory Completes First Orbit At Lagrange Point 1; What Is It?

What did Aditya-L1 discover?

The researchers detailed the findings, made using Aditya-L1’s Visible Emission Line Coronagraph (VELC), in The Astrophysical Journal Letters.

The researchers observed a phenomenon called ‘coronal dimming,’ a reduction in the brightness of the corona, following the July 16 CME. “The brightness in that area dropped by about 50%, a decrease caused by the ejection of solar material. This reduction in brightness lasted for about 6 hours,” ISRO said in a statement.

Coronal structures (left) disappeared in the image after the CME (right). Image: ISRO

Aditya-L1 also found that the temperature around the CME region was enhanced by about 30 percent and that the region became more turbulent during the event. The Sun’s dynamic magnetic field also becomes more active during such eruptions, causing further turbulence. ISRO says the velocity of the plasma was recorded at nearly 25 km per second during the CME.

Lastly, the researchers discovered that the plasma moves away from the observer during a CME event, a result of its deflection by the solar magnetic field.

“This finding shows that solar magnetic forces can influence the direction of propagation of the ejected plasma as it moves in the inter-planetary space,” ISRO stated.

It added that understanding such deflections of the ejected plasma is important for predicting how a CME evolves after leaving the Sun and travelling through the solar system.

ALSO SEE: ISRO Releases Pictures Of Monster Sunspots Captured By Aditya-L1

(Image: ISRO)


