What if you could interact with any piece of software using natural language? Imagine typing a prompt and having AI translate your instructions into machine-comprehensible commands, executing tasks on a PC or phone to accomplish the goal you just described.
That’s the idea behind Rabbit, a rebranding of Cyber Manufacture Co., which is building a custom, AI-powered UI layer designed to sit between a user and any operating system.
Founded by Jesse Lyu, who holds a bachelor’s degree in mathematics from the University of Liverpool, and Alexander Liao, previously a researcher at Carnegie Mellon, Rabbit is creating a platform, Rabbit OS, underpinned by an AI model that can — so Lyu and Liao claim — see and act on desktop and mobile interfaces the same way that humans can.
“The advancements in generative AI have ignited a wide range of initiatives within the technology industry to define and establish the next level of human-machine interaction,” Lyu told TechCrunch in an email interview. “Our perspective is that the ultimate determinant of success lies in delivering an exceptional end-user experience. Drawing upon our past endeavors and experiences, we’ve realized that revolutionizing the user experience necessitates a bespoke and dedicated platform and device. This fundamental principle underpins the current product and technical stack chosen by Rabbit.”
Rabbit, which has $20 million in funding from Khosla Ventures, Synergis Capital and Kakao Investment (a source familiar with the matter says the funding values the startup at between $100 million and $150 million), isn’t the first to attempt layering a natural language interface on top of existing software.
Google’s AI research lab, DeepMind, has explored several approaches for teaching AI to control computers, for example by having an AI observe keyboard and mouse commands from people completing “instruction-following” tasks such as booking a flight. Researchers at Shanghai Jiao Tong University recently open sourced a web-navigating AI agent that they claim can figure out how to do things like use a search engine and order items online. Elsewhere, there are apps like the viral Auto-GPT, which tap AI startup OpenAI’s text-generating models to act “autonomously,” interacting with apps, software and services both online and local, like web browsers and word processors.
But if Rabbit has a direct rival, it’s probably Adept, a startup training a model, called ACT-1, that can understand and execute commands such as “generate a monthly compliance report” or “draw stairs between these two points in this blueprint” using existing software like Airtable, Photoshop, Tableau and Twilio. Co-founded by former DeepMind, OpenAI and Google engineers and researchers, Adept has raised hundreds of millions of dollars from strategic investors including Microsoft, Nvidia, Atlassian and Workday at a valuation of around $1 billion.
So how does Rabbit hope to compete in the increasingly crowded field? By taking a different technical tack, Lyu says.
While it might sound like what Rabbit’s creating is akin to robotic process automation (RPA), or software robots that leverage a combination of automation, computer vision and machine learning to automate repetitive tasks like filling out forms and responding to emails, Lyu insists that it’s more sophisticated. Rabbit’s core interaction model is capable of “comprehend[ing] complex user intentions” and “operating user interfaces,” he says, to ultimately (and maybe a little hyperbolically) “understand human intentions on computers.”
“The model can already interact with high-frequency, major consumer applications — including Uber, Doordash, Expedia, Spotify, Yelp, OpenTable and Amazon — across Android and the web,” Lyu said. “We seek to extend this support to all platforms (e.g. Windows, Linux, MacOS, etc.) and niche consumer apps next year.”
Rabbit’s model can do things like book a flight or make a reservation. And it can edit images in Photoshop, using the appropriate built-in tools.
Or rather, it will be able to someday. I tried a demo on Rabbit’s website and the model’s a bit limited in functionality at the moment — and it seems to get confused by this fact. I prompted the model to edit a photo and it instructed me to specify which one — an impossibility given that the demo UI lacks an upload button or even a field to paste in an image URL.
The Rabbit model can indeed, though, answer questions that require canvassing the web, a la ChatGPT with web access. I asked it for the cheapest flights available from New York to San Francisco on October 5, and — after about 20 seconds — it gave me an answer that appeared to be factually accurate, or at least plausible. And the model correctly listed at least a few TechCrunch podcasts (e.g. “Chain Reaction”) when asked to do so, beating an early version of Bing Chat in that regard.
Rabbit’s model was less inclined to respond to more problematic prompts, such as a request for instructions for making a dirty bomb and a prompt questioning the validity of the Holocaust. Clearly, the team’s learned from some of the mistakes of large language models past (see: the early Bing Chat’s tendency to go off the rails) — at least judging by my very brief testing.
“By leveraging [our model], the Rabbit platform empowers any user, regardless of their professional skills, to teach the system how to achieve specific goals on applications,” Lyu explains. “[The model] continuously learns and imitates from aggregated demonstrations and available data on the internet, creating a ‘conceptual blueprint’ for the underlying services of any application.”
Rabbit’s model is robust, to a degree, to “perturbations,” Lyu added, such as interfaces that aren’t presented in a consistent way or that change over time. It simply has to “observe,” via a screen-recording app, a person using a software interface at least once.
Now, it’s not clear just how robust the Rabbit model is. In fact, the Rabbit team doesn’t know itself — at least not precisely. And that’s not terribly surprising, considering the countless edge cases that can crop up in navigating a desktop, smartphone or web UI. That’s why, in addition to building the model, the company’s architecting a framework to test, observe and refine the model as well as infrastructure to validate and run future versions of the model in the cloud.
Rabbit also plans to release dedicated hardware to host its platform. I question the wisdom of that strategy, given how difficult scaling hardware manufacturing tends to be, the consumer hostility of vendor lock-in and the fact that the device might eventually have to compete against whatever OpenAI’s planning. But Lyu — who curiously wouldn’t tell me exactly what the hardware will do or why it’s necessary — admits that the roadmap’s a bit in flux at the moment.
“We are building a new, very affordable, and dedicated form factor for a mobile device to run our platform for natural language interactions,” Lyu said. “It’ll be the first device to access our platform … We believe that a unique form factor allows us to design new interaction patterns that are more intuitive and delightful, offering us the freedom to run our software and models that the existing platforms are unable to or don’t allow.”
Hardware isn’t the only scaling challenge Rabbit faces, should it decide to pursue its proposed device strategy. A model like the one Rabbit’s building presumably needs a lot of examples of successfully completed tasks in apps. And collecting that sort of data can be a laborious — not to mention costly — process.
For example, in one of the DeepMind studies, the researchers wrote that, in order to collect training data for their system, they had to pay 77 people to complete over 2.4 million demonstrations of computer tasks. Extrapolate that out, and the sheer magnitude of the problem comes into sharp relief.
Now, $20 million can go a long way — especially since Rabbit’s a small team (9 people) currently working out of Lyu’s house. (He estimates the burn rate at around $250,000.) I wonder, though, whether Rabbit will be able to keep up with the more established players in the space — and how it’ll combat new challengers like Microsoft’s Copilot for Windows and OpenAI’s efforts to foster a plugin ecosystem for ChatGPT.
Rabbit is nothing if not ambitious, though — and confident it can make business-sustaining money through licensing its platform, continuing to refine its model and selling custom devices. Time will tell.
“We haven’t released a product yet, but our early demos have attracted tens of thousands of users,” Lyu said. “The eventual mature form of models that the Rabbit team will be developing will work with data that they have yet to collect and will be evaluated on benchmarks that they have yet to design. This is why the Rabbit team is not building the model alone, but the full stack of necessary apparatus in the operating system to support it … The Rabbit team believes that the best way to realize the value of cutting-edge research is by focusing on the end users and deploying hardened and safeguarded systems into production quickly.”