Perplexity Computer did six months of work in two hours. Here is our honest view.

Perplexity Computer completed full competitive research, a brand strategy, and a design system build before most of our morning coffees had kicked in. Before you get excited, here is what that actually means, and what it does not.


This morning I handed Perplexity Computer a brief that would normally take a team of 20 people the better part of six months. Competitive research, market analysis, brand positioning, a full brand system, document templates, a website, brand guidelines. The whole thing. Less than two hours later, it was done.

I have been working in and around AI for long enough to be sceptical of my own excitement. So let me be precise about what happened, because the nuance matters more than the headline.

What is Perplexity Computer?

Perplexity is known as a search engine, and a good one. But Computer is something different. It is an agentic tool that takes a complex brief, breaks it into subtasks, runs them in parallel using sub-agents, and assembles the outputs into structured deliverables. Think of it less as a chatbot and more as a capable analyst with access to the web, a brief, and the ability to keep working until the job is done.

What made this morning different was the scope of what it handled in a single workflow: live competitive intelligence, strategic synthesis, brand positioning, visual identity logic, and document production. Not sequentially. Simultaneously.

The obvious comparison here is Claude Cowork, Anthropic's desktop agent, and it is one I want to make properly. I cannot yet, because Cowork still does not run on my machine, which is a couple of months old. That is a frustration worth naming: when you are charging top-tier licence fees, prioritising a narrow slice of hardware configurations while paying customers sit on the sidelines feels like the wrong call. I will do a proper head-to-head as soon as I can actually get my hands on it.

The honest version of the story

Here is where I have to be careful with the six-months-and-a-team-of-twenty claim, because it is true and also incomplete.

It is true in the sense that the volume of output, and the coordination required to produce it, would genuinely have taken a significant team a significant amount of time working in the traditional way. Research alone would have been weeks. Brand strategy, more. Design system documentation, more still.

It is incomplete in three important ways. First, I still had to quality-check everything. The tool does not replace judgement; it replaces time. Second, the outputs needed directing. Perplexity Computer is not a mind-reader. It executed well because it was given a clear brief with genuine intent and structured instructions. Rubbish in, rubbish out still applies. Third, there were things it got slightly wrong, things I caught and corrected, and things I made better decisions about because I know the context it does not.

So the headline is real. The fine print is: a human still ran the show.

Why most people will not get these results

The gap between what these tools can do and what most people actually get from them is enormous, and it is almost entirely a prompting problem.

The brief you give Perplexity Computer, like the prompt for most agentic AI tools, does two jobs at once: it shapes what the tool searches for, and it shapes how it assembles the answer. Get either wrong and you get scattered, generic output that looks superficially plausible and is fundamentally useless.

The instructions that made this morning work were not complicated, but they were deliberate. Clear intent, defined scope, named deliverables, and explicit criteria for what good looks like. Uncertainty handling baked in, so the tool tells you when it does not know something rather than confidently making it up.

We have built a framework for writing Perplexity instructions that covers exactly this. The short version: treat it like handing work to a capable analyst, not chatting to a search bar. Write for retrieval and synthesis together. Give it a definition of done. Tell it what to do when the evidence is thin.

What this changes, and what it does not

If you are a founder, a small agency, or a lean team trying to do work that previously required headcount you do not have, this class of tool is genuinely significant. The work that used to gate your ambitions, the research you could not afford to commission, the brand work that was always on the to-do list, the documentation that never got written, that work is now within reach.

What it does not change is the need for someone who knows what they are doing to be in the loop. These tools do not bring strategic clarity. They do not know your clients, your constraints, or the context you have spent years building up. When well directed, they execute at a level most teams cannot match. That directing part is still a human job.

This is exactly why we keep saying governance before tools. Not because AI is dangerous, but because the quality of what comes out is entirely determined by what goes in, and what goes in is entirely determined by the human running the process.

Want to get results like this?

We work with founders and small teams to build the AI fluency that makes this kind of productivity real, not accidental. If you want to know where to start, get in touch.

And if you want the Perplexity instructions framework, ask us for it at hello@kintal.co.
