Can we build the dog?

What resource-constrained teams need to ask before writing a line of code


“Will the dog hunt?”

My sneakers squeaked on the concrete floor. Twenty entrepreneurs in black hoodies looked up at me, taking notes. This room had been a working garage once; now, we ceremonially opened the garage door to let in new cohorts of early-stage media startups with the potential to change media for good. Outside, the San Francisco traffic honked and screeched.

We were midway through the bootcamp: the week-long course at the beginning of the accelerator that aimed to teach startups the fundamentals of human-centered venture design. We’d taken them out of their comfort zone to help them use journalistic skills to understand who they were building for and why. We’d helped them think about how to tell the story of their business effectively, in a way that sharpened their underlying strategy.

And now I was trying to explain feasibility.

I echoed Corey Ford, the Managing Director, who had laid out the groundwork in the days before. Repetition was our friend.

Desirability, I explained, is your user risk: are you building something that meets a real person’s needs? Will the dog hunt?

Viability, in turn, is your business risk: if you are successful, can your venture succeed as a profitable, growing business? We stretched the metaphor a little bit here: will the dog eat the dog food?

And now it was time to explore Feasibility: can you provide this service with the team, time, and resources reasonably at your disposal? I leaned in conspiratorially and vamped: can we build the dog?

It was the best job I ever had: using my experience as a founder, an engineer, and a storyteller to support teams that were genuinely trying to make a difference. People who went through Matter have gone on to help countless newsrooms succeed by being more empathetic and product-minded; some have left media and even gone on to build hospitals.

I went through Matter as the founder of Known in 2014 and came back to support other founders a few years later. Since then, how I think about feasibility has completely changed.

The build vs the long, wagging tail

The center of gravity for feasibility, at least in my mind, used to be the build stage. How do you build the initial version of a tool or a service that provides a minimum desirable experience to meet your user’s need?

Startups also need to consider scale: can you address a much larger number of users with the time, team, and resources reasonably at your disposal? If you can’t get there with the resources you have, could you get there with investment dollars you could realistically raise?

In a newsroom or other organization the formula is a little bit different. You’re probably not raising money for a specific tool or a service — although, sometimes, grant funding or funding from a corporate parent is available for certain things. But you’re most often asking whether you can provide the service or tool with the time, team, and resources currently at your disposal. Often, your time is limited, your team is small, and your resources are meagre. You have to make stark tradeoffs.

In both contexts, you don’t want to waste time spinning your wheels building solutions to problems other people have already solved. I’ve already written about building vs buying for newsrooms:

Newsroom tech teams are like startups in that they’re running with limited resources and constantly trying to assess how they can provide the most value. Back when I was Director of Investments at Matter Ventures, I advised them to spend their time building the things that made them special — their differentiating value — and using the most boring, accepted solution for everything else.

It's a rule of thumb that works universally: build what makes you special and buy the rest. But the critical difference is that, in newsrooms, what makes you special is the journalism that software enables, not the software itself.

The cost of building something new has fallen through the floor over the last decade. Developer tools have become more powerful, numerous, and freely available. Open source has exploded with libraries that can help you get to an initial version much faster.

Enter AI. Almost without warning, AI-enabled tools dramatically expanded what a resource-strapped team can create. It’s a genuine sea change. The more founders and senior engineers I speak to who are actively using these tools, the more stories I hear about accelerated development. People are building smaller tools that would have taken many sprints in less than a day; founders are building entire startups that might have taken six months in less than one.

But all code needs to be maintained. There are bugs; libraries need to be upgraded; underlying platform changes introduce security flaws and incompatibilities. Changes in business needs mean that tools and services need to be adjusted. All of those things add up to a maintenance overhead that comes with introducing any new tool or service. If we rapidly build more and more software, that maintenance overhead accumulates at speed. Even if we have the discipline to keep our technical footprint small, we’re not absolved from doing what has to be done to keep everything running.

When we consider feasibility, the center of gravity is no longer in building the thing. It’s supporting it.

A shared rubric reduces risk

The dynamics may have changed, but every team still needs to make a bet about whether it can build and support a project before taking it on. If something is obviously not feasible, the team shouldn’t do it. On the other hand, if a team doesn’t have a clear, shared understanding of how to assess feasibility, it can become an easy way for someone to subjectively shut down a project for arbitrary reasons. Without a clearly shared understanding of risk, the idea of risk can be poison.

So that’s what we need: a shared rubric for assessing the feasibility of a project. Our assessments won’t always be right, and we always learn new things about a project in the course of building or supporting it. But while complete certainty is hard to achieve, this will at least provide directional information about whether we can do it.

There are existing frameworks, but they’re mostly designed for large enterprise environments: instead of giving you a directional gut check, they produce documents used in commissioning vendors, justifying budgets to executives, and satisfying governance processes. TELOS — Technical, Economic, Legal, Operational, and Scheduling — feasibility tests are very broad and don’t consider technology alone. PIECES examines whether a proposed project will improve the status quo across Performance, Information, Economics, Control, Efficiency, and Services. Both are useful to understand, but also not quite what most time-strapped contexts demand. We need something scrappier.

A prototype rubric for feasibility

Here are some questions a team can ask themselves. Not only are they useful in themselves, but you can use them for alignment: if a product manager ranks a factor with a low score but a senior engineer on the team ranks it with a high one, you know there’s a problem that you need to dig into.

Each of these questions can lead to its own targeted discussion. The purpose of the rubric is not to be a thought-ending exercise: it’s to align a team around what’s actually important to consider, and open up conversations about any disagreements so that everyone can come to a consensus.

Each person on the team should run through the rubric — perhaps asynchronously — and then share their results with the group in a shared meeting.

These questions take our AI engineering context into account: questions about exploring new architectures are weighted lower than they would have been ten years ago.

1. The problem context (25 points)

How much of this project's scope is a black box? (10 points)
1 = We have built this exact thing before; 10 = This is completely new territory: we don’t even know what questions to ask yet.

How much friction will our existing tech stack, legacy systems, or organizational quirks add to the build? (10 points)
1 = Greenfield project using our preferred stack; 10 = Navigating a maze of legacy spaghetti code or systems.

How fundamentally difficult is the core problem we are trying to solve? (5 points)
1 = A standard CRUD app; 5 = Uncharted algorithmic research.

2. Execution (15 points)

Here, the “development period” is tailored to your unit of project organization time on your product roadmap. On some teams, it’s a quarter; for others, it’s half a year. These questions are not meant to be considered at the sprint level.

How much of the team’s total capacity will the initial build consume? (10 points)
1 = We can spin this up in an afternoon; 10 = Consumes 100% of the engineering team's capacity for the development period.

How well do the skills required match the people we currently have? (5 points)
1 = We have deep expertise here; 5 = Requires learning new frameworks from scratch.

3. The long, wagging tail (60 points)

While the earlier questions consider up-front issues, this section describes the ongoing overhead for a team. This represents risk: time a team spends maintaining an existing tool is time it can’t spend building anything new or maintaining other tools. Over time, without careful lifecycle management and brutal decision-making, a team’s bandwidth can disappear into ongoing maintenance.

How long are we committing to keep this system alive? (20 points)
1 = A disposable prototype or short-term event tool (weeks or months); 20 = A permanent, foundational system we expect to rely on for years.

How much of our team’s capacity will keeping this system alive consume in a typical month? (30 points)
1 = Set it and forget it; 30 = Requires daily babysitting, constant bug fixes, and continuous adaptation to upstream changes.

How heavily does this project rely on teams, platforms, APIs, or vendors that we do not control? (10 points)
1 = Fully self-contained; 10 = Dependent on unstable APIs, beta AI models, or restrictive third-party vendors.

4. The blast radius (40 points)

Because modern tools let small teams build powerful things quickly, the risk of deploying something dangerous or irreversible is higher. This category is weighted heavily to catch those risks.

How sensitive is the information that this tool handles? (20 points)
1 = Public, anonymous data; 20 = Handling highly sensitive PII, whistleblower documents, or financial data.

How catastrophic is it if we ship an imperfect, glitchy version? (10 points)
1 = We can ship it broken and iterate safely; 10 = Mission-critical; if it’s not perfect on day one, we burn trust or face legal ruin.

If we realize this was a mistake halfway through, how hard is it to undo? (10 points)
1 = A two-way door; we can easily turn it off; 10 = A one-way door; involves irreversible data migrations or permanent structural changes.

Once everyone has tallied their numbers, compare your total scores out of 140. This is nothing more than a temperature check: again, it should be considered a conversation-starter, not the final word.

Roughly, here’s how the scores break down:

0–45: Green light. This project is highly feasible. The initial lift is manageable, the ongoing tax on your team is low, and the blast radius if things go wrong is minimal. Build the dog.

46–95: Yellow light. This is the danger zone of hidden costs. You can probably build this, but the lifespan, ongoing maintenance, vendor dependencies, or security requirements will create a permanent drag on your team’s velocity. Before proceeding, ask yourselves: what existing project are we willing to sunset to make room for this new maintenance burden? What’s the opportunity cost, and is the lift to the organization worth it? While this rubric only considers feasibility, this is a good time to go back to desirability and make sure the juice is worth the squeeze.

96–140: Red light. This project is fundamentally infeasible with your current resources. The complexity is too high, the blast radius is too dangerous, or the multi-year maintenance load will simply sink your engineering team. If this project is absolutely vital to the business, you cannot build it scrappily: you need to buy an enterprise solution, hire specialists, or radically reduce the scope by choosing a much smaller problem to solve first.
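As an illustrative sketch only (the data structures and function names here are hypothetical, not part of any published tool), the tally and traffic-light thresholds above could be expressed like this:

```python
# Maximum points per question, grouped by rubric category.
# Totals: 25 + 15 + 60 + 40 = 140, matching the rubric above.
RUBRIC = {
    "problem_context": [10, 10, 5],     # black box, stack friction, difficulty
    "execution": [10, 5],               # build capacity, skills match
    "long_wagging_tail": [20, 30, 10],  # lifespan, upkeep, dependencies
    "blast_radius": [20, 10, 10],       # data sensitivity, glitch cost, reversibility
}

def light(total: int) -> str:
    """Map a total score (out of 140) to a traffic light."""
    if total <= 45:
        return "green"
    if total <= 95:
        return "yellow"
    return "red"

def tally(scores: dict[str, list[int]]) -> tuple[int, str]:
    """Validate and sum one reviewer's scores; return (total, light)."""
    for category, values in scores.items():
        maxima = RUBRIC[category]
        assert len(values) == len(maxima), f"wrong question count in {category}"
        assert all(1 <= v <= m for v, m in zip(values, maxima)), category
    total = sum(sum(values) for values in scores.values())
    return total, light(total)
```

A reviewer who scores, say, a 26 on the long, wagging tail but low everywhere else would land in yellow territory overall, which matches the rubric’s advice to weigh ongoing maintenance most heavily.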

Once again, this isn’t the be-all and end-all. For one thing, the rubric is not a simple average: the pattern matters as much as the total. It’s worth checking for category dominance: if your scores are generally low but much higher for the long, wagging tail, that doesn’t mean you have an unambiguous green light. These categories may also surface conversations that aren’t cleanly captured by the rubric. But the first step towards shared understanding is building a structure to hang that understanding on, and hopefully this gets you some of the way there.

And please: talk to people. For more complicated projects, I always think it’s a good idea to speak to experts in order to validate your assumptions about feasibility. For any project, speaking to your users to make sure you’ve nailed desirability, and speaking to equivalent businesses to validate your viability assumptions, are crucial. The map is not the territory, and sometimes you need multiple maps.

Can we build the dog?

The beauty of a shared rubric isn’t that it automatically makes decisions for you. It’s that it forces a team to look at the exact same map. If a product manager scores the project a 40, but a senior engineer scores it a 105, you’ve found an area of disagreement that you need to explore. It’s far better to do that early, before you dive into complicated specification work or writing code.

In a world where AI and modern tooling make it dangerously easy to spin up new software, our ultimate constraint is no longer our ability to type code. It’s our capacity to care for the things we bring into the world. Saying "no" to a project with a massive, hidden maintenance burden isn't a failure of imagination; it is how you protect your team’s time so they can focus on the journalism, the community, or the core mission that actually makes your organization special.

Today, building the dog is the easy part. The real question is whether you have the time, energy, and resources to feed it, walk it, and take it to the vet for the next five years.

If you do, then by all means: let’s see if it hunts.