
How to Choose a B2B Consultant in 2026 (5 Questions to Ask Before You Sign)

By Dancho Dimkov · 11 min read

The B2B consulting industry is unusually good at hiding its failures. Five questions to ask any consultant before you sign — and what their answers tell you about how the engagement will actually go.

Bad engagements don't get publicised. Disappointed clients quietly stop returning calls rather than write angry case studies. Most "consultant marketing" is selection bias: the wins shouted from the rooftops, the losses buried in NDAs.

This is a problem for SME founders, because it means the public information available about choosing a consultant is almost entirely promotional. You read polished testimonials, scan sleek websites, sit through carefully orchestrated discovery calls, and then sign a contract worth €15K, €50K, or €200K based on impressions rather than evidence.

There's a better way. It takes one conversation, five questions, and the willingness to listen carefully to the answers. This guide walks through what to ask, why each question matters, and what to do with what you hear.

Why this guide exists

SMEs lose more money to bad consulting engagements than to almost any other operational mistake of similar size. Not because consultants are uniquely dishonest — most aren't — but because the match between consultant and client is hard to evaluate up front, and the cost of a mismatch is enormous.

A typical bad engagement looks like this: the consultant arrives with a standard methodology, applies it to your business with minor cosmetic adjustments, produces a deck, runs a few workshops, leaves you with a 90-day plan that mostly restates what you already knew, and invoices €40K. Six months later you've implemented 20% of the plan, the rest is gathering dust on a shared drive, and your business is roughly where it was, except that it's €40K poorer and a quarter of your team has spent hours in workshops they didn't believe in.

The signs of a bad engagement are visible before it starts, if you know what to listen for. The five questions below are the ones I've found most reliable for separating consultants who will actually move your business forward from those who will perform consulting at you.

I'm a B2B consultant myself, so take what follows with the appropriate scepticism. But also: I'd rather lose business to a competitor who genuinely fits your needs than win business I shouldn't have, deliver a mediocre engagement, and damage my reputation. The questions below will lose me deals too. That's the right outcome.

Question 1: "Show me a recent engagement that failed — what went wrong?"

Why this question matters: Anyone who has done real consulting work for more than two or three years has had at least one engagement that didn't deliver what the client wanted. If a consultant tells you they've never had a failed engagement, they're either inexperienced, dishonest, or so cautious in client selection that they never take on meaningful work.

What you're listening for: specific, recent, self-aware.

A good answer sounds like this: "Last year we worked with a 30-person logistics company. The diagnostic identified that their biggest issue was sales team productivity, not their CRM as they'd assumed. We built a sales operations layer over four months. The engagement technically delivered what was scoped — but the founder didn't actually want sales discipline, he wanted a tool to blame. Within six months he'd reverted to the old habits. We learned to test for executive sponsorship more carefully in the diagnostic phase. Now we ask three specific questions in the first meeting that would have flagged this."

That answer has four things going for it:

  • Specific — real industry, real size, real timeline.
  • Recent — within the last 12–18 months, not a vague "early in our practice."
  • Honest about the consultant's own role — they don't blame the client entirely.
  • Translated into a process change — they fixed the upstream issue, not just complained about it.

A bad answer sounds like: "We don't really have failures, but sometimes clients aren't ready to do the work," or "Every engagement is a success in different ways." These are red flags — either lack of experience or lack of self-awareness, both expensive when you're the one paying.

Question 2: "How do you decide what NOT to recommend?"

Why this question matters: Consultants with one tool see one type of problem. If a firm's revenue depends on selling implementation services, they will, consciously or not, find implementation-shaped problems in your business.

The question tests for methodology vs. salesmanship. A consultant with a real methodology can explain how they distinguish between problems that need their help and problems that don't — because their methodology has a frame that excludes some things. A consultant whose methodology is "we figure it out" will recommend whatever they sell, every time.

A good answer references explicit criteria: "We don't recommend AI implementation if a company hasn't first stabilised its core processes — we've watched too many AI projects fail because they were layered on top of broken workflows. So if our diagnostic shows process maturity below a certain threshold, we tell the founder to fix the foundations first. That sometimes means we walk away from a six-figure engagement because the timing is wrong."

A bad answer is generic: "We tailor every recommendation to the client's needs," or "We only recommend what's right for the business." These are tautologies — they don't tell you anything about how the decision actually gets made.

The follow-up to push on: "Can you give me an example of a recommendation you decided NOT to make for a recent client, and why?" If they can't produce one, they probably make every recommendation they could possibly bill for.

Question 3: "What does your engagement look like at month 3, month 6, and after we end?"

Why this question matters: This is the question that surfaces the dependency hustle — the engagement structure that subtly creates ongoing reliance on the consultant rather than building durable capability inside your team.

The dependency hustle is rarely intentional. It usually emerges from incentive misalignment: consulting firms are rewarded for billed hours, so they tend to design engagements where the deliverables only function as long as the consultant is involved. You end up with bespoke dashboards no one in your team can update, processes documented in shared drives only the consultant remembers exist, and "ongoing advisory retainers" that quietly extend forever.

A healthy engagement structure has visible capability transfer milestones.

A good answer sounds like: "At month 3, we're co-creating with your team — we lead, they participate. At month 6, the dynamic flips — your team leads, we coach. By the time the engagement ends, your operations manager is running the system independently, with all documentation and tooling owned by your company. About 30% of clients want to extend us as ongoing strategic partners; we structure those separately, with explicit scope and end dates, never as open-ended retainers."

That answer demonstrates an explicit transition plan, names a specific role on the client side that takes ownership, and addresses the after-engagement question without dodging it.

A bad answer is vague about the end: "We adapt to your needs as the engagement evolves," or "We typically continue as an ongoing partner — most of our clients keep us involved long-term." The first is non-committal; the second is the dependency hustle made explicit.

Question 4: "What does your firm walk away from?"

Why this question matters: Saying yes to everything is the same as saying no to expertise. A consulting firm that takes any client, any industry, any problem is either inexperienced, financially desperate, or both.

Real specialisation has visible boundaries. A consultancy that specialises in B2B SaaS won't do well with a manufacturing client. A firm built for $50M+ enterprises will be a poor fit for a 15-person agency. A team of strategists will be the wrong choice for a tactical execution problem. None of these are weaknesses — they're the natural consequence of being good at something specific.

A good answer is concrete and honest: "We don't take engagements with companies under 5 employees because the operational maturity isn't there yet to absorb our work. We don't take engagements where the founder isn't going to be personally involved — we've learned the engagement always fails when it's delegated to a middle manager. We don't take engagements in heavily regulated industries (financial services, healthcare) because we don't have the domain expertise to do them justice. We refer those out."

That answer tells you exactly when you're a fit and when you're not. It also signals confidence — a consultant willing to say "we're wrong for that problem" is usually better at the problems they do take.

A bad answer is universalist: "We work with companies across all industries and sizes," or "We adapt our approach to any context." This usually means they have one approach and apply it everywhere, with cosmetic adjustments.

Question 5: "How do you measure success on this engagement, and who decides?"

Why this question matters: This is the question that exposes the difference between outcome-based and activity-based consulting.

Activity-based consulting measures success by what was delivered: number of workshops run, decks produced, hours invoiced, milestones marked complete. Outcome-based consulting measures success by what changed in your business: revenue movement, cost reduction, decision speed, employee retention, customer satisfaction. The first is easier to commit to and harder for you to challenge. The second is harder to commit to and the only thing that actually matters.

A good answer ties success to your business outcomes, not theirs: "At engagement kickoff, we agree on three to five business metrics that should improve as a result of our work — typically time-to-decision, revenue per employee, customer churn, or specific operational KPIs. We measure baseline at week one and target improvements at engagement end. You decide whether we hit those targets. If we don't, we don't bill the final retainer payment. We've eaten that cost three times in the last four years and it's the right discipline."

That answer ties consultant compensation to client outcome — which is unusual in B2B consulting. It also names you as the decision-maker on success, not the consultant grading their own homework.

A bad answer is process-focused: "We measure success by the quality of the deliverables and the satisfaction of the leadership team," or "Success is when your team is using the systems we've built." These can be true without your business actually being better off.

If a consultant can't or won't tie their definition of success to your business metrics, they're either not confident their work will move the metrics, or they don't want to be measured against an outcome they can't fully control. Either reason is a problem.

Red flags to watch for during the sales process

Beyond the five questions, the way a consulting firm sells to you tells you a lot about how they'll deliver to you. Watch for these:

  • The proposal arrives within 48 hours of the discovery call. Speed is a virtue, but not in proposal-writing. A serious proposal requires actually thinking about your situation. A 48-hour proposal is almost certainly a templated one with your name swapped in.
  • The scope is vague — "phase 1 will define phase 2." This is the bait-and-switch model: lock you in with a small initial scope, then expand once you're committed. A good consultant scopes the whole engagement up front, even if the later phases have ranges and contingencies.
  • Testimonials don't include metrics. "Working with [Firm] was transformative for our business" is not a testimonial. "We grew revenue 40% in 18 months and reduced operational headcount by 25%" is a testimonial.
  • They can't produce three reference clients you can call without supervision. Healthy firms have happy clients willing to talk to prospects. If every reference call is heavily managed or limited to specific people, the firm is filtering for the rare wins.
  • The engagement requires you to commit to 12+ months upfront. Some engagements do require long commitments — but you should be able to test the relationship for 90 days first.

Green flags that signal a strong fit

Conversely, here's what tells you a consulting firm is worth serious consideration:

  • They suggest a paid diagnostic or pilot before the main engagement. A small, scoped first project (€2K–€5K) lets both sides test the relationship before committing to something large.
  • They have a written methodology you can read about, not just a "philosophy." Look at their site, their blog, their books. Can you understand how they work without being on a sales call?
  • They tell you what they're not going to do, not just what they are. Specificity about scope boundaries is a sign of experienced project management.
  • The senior person you meet on the sales call is the same senior person who will run your engagement. Confirm in writing who is actually doing the work.
  • They give you something useful in the discovery call without being asked. A genuine consultant will share an observation, a framework, or a question that's helpful to you whether you hire them or not. Salespeople won't.

What to do AFTER you've shortlisted

Assuming you've identified two or three consulting firms that pass the five questions and the red/green flag checks, here's the process for the final decision:

1. Reference checks — but ask the right questions

Most reference calls produce useless information because the questions are too generic ("Did you enjoy working with them? Would you recommend them?"). Instead ask:

  • "What's the one thing they did better than you expected?"
  • "What's the one thing they did worse than you expected?"
  • "Did the engagement deliver on what was promised in the original proposal? If not, where was the gap?"
  • "Six months after the engagement ended, was your team still using what was built? If not, why not?"
  • "If you could go back and structure the engagement differently, what would you change?"

These questions get past the polite reference call and into the actual experience. The fifth question is particularly diagnostic — it surfaces what the client now wishes they'd insisted on.

2. Run a paid trial before the main engagement

A €2K–€5K diagnostic, audit, or first-project pilot tells you more in three weeks than three months of sales conversations. You see how the firm actually works, how they communicate under pressure, whether the senior person stays involved, and whether their methodology produces useful output.

If a firm refuses to do a paid trial, that's a red flag. If they offer one cheerfully and structure it well, that's a strong signal.

3. Negotiate the structure of the main engagement carefully

Once you've decided to proceed, the contract structure matters as much as the firm choice. Insist on:

  • Quarterly stop points — natural break-clauses where either side can end the engagement without penalty.
  • Defined deliverables per phase — not "we'll figure out month 4 in month 3."
  • Named senior personnel — the people you met on the sales call, written into the contract.
  • Outcome-tied compensation — at minimum, a portion of fees tied to the business metrics you agreed on in Question 5.
  • Capability transfer milestones — written into the contract, with specific names of your team members who take ownership at each phase.

A note on price

The B2B consulting market in 2026 has price points ranging from €50/hour freelancers to €5,000/day strategy partners. Cheap consultants are usually cheap for a reason; expensive consultants are not always worth what they charge. Price alone is the worst possible criterion for selection.

Some rough benchmarks for SME-scale engagements in the European market:

  • Diagnostic / audit (3–6 weeks): €2,000–€10,000
  • Focused implementation engagement (2–4 months, one functional area): €15,000–€50,000
  • Comprehensive operating system installation (6–12 months): €40,000–€150,000
  • Ongoing fractional executive (1–2 days/week): €4,000–€12,000/month

Anyone significantly under these ranges is either inexperienced, desperate, or planning to scope-creep aggressively. Anyone significantly over them is either elite-tier (rare) or relying on brand premium that may not translate to your specific situation.

How we answer the five questions ourselves

In the spirit of practising what I'm preaching, here's how the BusinessPulse OS team answers the five questions:

  1. Failed engagements: Yes — most recently a Q3 2025 engagement with a 40-person services company where we diagnosed the right problems but the founder refused to involve the executive team, and the implementation stalled at month 4. We've since added executive-team involvement as a non-negotiable in our discovery process.
  2. What we don't recommend: We don't recommend AI implementation before process maturity is in place. We don't recommend Business Build before Business Pulse. We don't recommend Business Growth without product-market fit. We routinely turn down work that doesn't sequence properly.
  3. Engagement shape over time: Diagnostic is consultant-led (30 days). Implementation phase 1 is co-created (months 1–2). Phase 2 is client-led with our coaching (month 3 onward). End state: your team owns and runs everything we built. About 25% of clients continue as strategic advisory; 75% finish and go run what we built together.
  4. What we walk away from: Companies under 5 employees. Companies in heavily regulated industries (we lack the domain depth). Companies where the founder isn't personally committed to the engagement. US-only companies (we serve them, but EOS is often a better fit for that market — see our comparison guide).
  5. How we measure success: At kickoff we agree on three to five business metrics. We measure baseline at week one and at engagement end. The client decides whether we hit them. We've structured outcome-tied final invoices on three engagements in the last 18 months and forfeited the final payment once.

If those answers match what you're looking for in a consulting partner, run the diagnostic. If they don't, this guide should help you find a firm whose answers do.

Frequently asked questions

Is hiring a consultant cheaper than hiring a full-time executive?

Often yes, in absolute terms, but it depends on duration. A €50K six-month consulting engagement costs roughly the same as one quarter of a fully-loaded €120K/year executive hire (once employer taxes, benefits, and overhead push the true annual cost towards €180K–€200K). The consulting engagement gives you specialised expertise for a defined period; the full-time hire gives you ongoing capability. The right answer depends on whether you have a specific, finite problem (consultant) or an ongoing function that needs leadership (hire).
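If you want to run this comparison for your own numbers, the arithmetic is simple enough to sketch. The 1.6× load factor below is an illustrative assumption (employer taxes, benefits, and overhead vary by country and role), not a universal figure:

```python
def fully_loaded_annual(base_salary: float, load_factor: float = 1.6) -> float:
    """Approximate total annual employment cost for a full-time hire.

    The 1.6x load factor (employer taxes, benefits, overhead) is an
    illustrative assumption; real multipliers vary by country and role.
    """
    return base_salary * load_factor

# A EUR 50K, six-month engagement vs. a EUR 120K/year executive hire.
engagement_cost = 50_000
exec_quarterly_cost = fully_loaded_annual(120_000) / 4

print(f"Consultant (6 months):         EUR {engagement_cost:,}")
print(f"Executive (1 quarter, loaded): EUR {exec_quarterly_cost:,.0f}")
```

With these assumptions, the quarter of a loaded hire comes to €48K, close to the €50K engagement. The useful comparison is not the invoice total but the cost per month of the capability you actually needed.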

Should I hire one consultant or a firm?

A solo consultant gives you the best person all the time, with no senior-junior dilution, and usually lower cost. A firm gives you broader skill coverage, redundancy if your primary contact leaves, and access to more methodologies. For most SME engagements, a small firm or a senior solo consultant is the sweet spot — large firms typically over-engineer the engagement for SME scale.

How do I know if a consultant is genuinely senior or just a salesperson dressed up?

Ask about specific past engagements in detail. A real senior consultant can talk for thirty minutes about the technical and emotional dynamics of a single engagement they led. A salesperson will pivot back to high-level positioning within a few minutes. The depth of detail a person can offer about their own past work is the single best signal of seniority.

What's the difference between a consultant and a fractional executive?

A consultant is an external advisor who recommends and often implements alongside you. A fractional executive is a part-time leader who actually holds a role on your team (fractional CFO, COO, CRO). Consultants are usually better for projects with defined scope; fractional executives are usually better for ongoing functional leadership you can't yet justify hiring full-time for.