How we work

Four stages. Start at one.

Every stage is scoped, priced and delivered on its own. No lengthy onboarding, no six-figure retainer up front, no open-ended contract. The audit runs first because the audit is the thing that tells us whether any of the rest is worth doing.

01
Stage one

Discovery & Audit.

We embed alongside the team and map how the business actually runs: not from a single Zoom interview, but from the work itself. We sit with the people doing the tasks, watch how work moves from input to output and identify every point where AI, automation, or a better tool choice could meaningfully change the economics.

The output is a complete, costed roadmap: every opportunity, ranked, with a build plan and a cost estimate attached to each. Nothing gets built in this stage. The point is to know exactly what is worth building and in what order, before any further commitment.

The audit is not opinion. It is observation backed by time data, tool data and an honest read on the team's current AI fluency. It finds the work you already do that AI can take on, the work you do not need to be doing at all and the tools you are paying for but not using.

What you receive at the end of Stage 01

  • Full workflow map. A visual diagram of every repeating process across the business, from daily operations through to client delivery and new business.
  • Time audit. Real hours tracked against each workflow type, showing where the recoverable time actually sits.
  • Tool audit. Every software subscription reviewed: what it costs, what is actually being used, what to keep, renegotiate, or cancel.
  • AI skills assessment. An honest read of the team's current comfort with AI, identifying where training will land hardest.
  • Opportunity register. Every opportunity for AI, automation, tooling, or process redesign, ranked by impact, effort and ROI, each with a build plan and cost estimate.
  • Training plan for Stage 02. Role-specific and matched directly to the capabilities we would build next.

02
Stage two

Training & Enablement.

Before we build anything custom, the team needs to be fluent in the AI tools that already exist. This is not a generic lunch-and-learn. It is hands-on, role-specific training built around your work, using the tools that best suit your stack and your people.

A team that already knows how to get the most out of off-the-shelf AI tools gets far more value from whatever we build for them next. The custom work lands properly because the people using it already think in the shape of the tools.

For smaller organisations, this can be as light as two workshops and a shared prompt library. For larger ones, it is a structured programme across departments with graduated levels and follow-up clinics.

What the training covers

  • Role-specific sessions. Training tailored to how each team member actually works, not one-size-fits-all.
  • Tools applied to real work. Whatever AI tools suit your business, used directly on live company work during the session.
  • Prompt engineering. How to get consistent, high-quality outputs; how to use one AI to instruct another; when to commit a prompt to a shared library.
  • Practical exercises. Every session uses real company work, not hypothetical examples. Skills are usable the next morning.
  • Competency baseline. The team reaches a consistent level of AI fluency before any custom build is handed over.

03
Stage three

Build Sprints.

With the audit complete and the team trained, we build what the roadmap identified. One build at a time. Each is a scoped project with a fixed price, a fixed timeline and a defined deliverable agreed before any code is written.

Builds are not limited to automations. Depending on what the audit surfaces, a sprint might produce an internal AI assistant for a specific team; a decision-support tool for a specific process; a data pipeline that makes your existing systems useful to AI; a custom integration between tools that do not talk to each other; or a purpose-built piece of software that replaces a bloated SaaS subscription.

Because the team already understands AI from Stage 02, the handover is faster, adoption is higher and the outputs are used correctly from day one. The builds that get shipped are the ones your people will actually run.

What a build sprint delivers

  • Scope document. What the build does, what it replaces, how long it takes, what it costs, what subscriptions it needs to run.
  • Live testing. Every build is tested with the actual team members who will use it, not in isolation.
  • Handover and documentation. Your team can operate and maintain the build without depending on Consilix day to day.
  • Training session. A hands-on session for every person whose role the build affects.
  • Outcome benchmark. Measured against the baseline captured during the audit, so you know what actually changed.

04
Stage four

Embedded Partnership.

Once several things are live, Consilix becomes your embedded AI capability: the function you do not need to hire for. We keep the live systems working, improve them as the team's needs evolve, build new capabilities as they are identified and keep you ahead of what is changing in AI and in your market.

Rolling monthly. Cancel any time. Priced to sit below the fully loaded cost of a single in-house AI engineer and scoped to do the work of one, with the backing of a wider network if a specific build needs specialist hands.

What the retainer includes

  • System maintenance. API connections, prompt refinement, output quality monitoring across every live capability.
  • Monthly iteration. Continuous improvement based on team feedback and real usage patterns.
  • New builds. Additional capabilities added as new opportunities emerge, scoped and priced inside the retainer where they fit, or as separate sprints where they do not.
  • AI education. Keeping your team current as tools evolve: what is worth adopting, what to ignore.
  • Strategic positioning. Helping the business articulate its AI capability to clients, investors and in new business.

A note on scope

What Consilix does not do.

We do not sell off-the-shelf products. We do not run support centres; we do not operate call centres; we do not mass-produce customer-facing chatbots. We do not sell "AI strategy" as a deck-only deliverable where nothing is ever built.

If the honest answer to a process is "you do not need AI for this, you need a cleaner spreadsheet", that is what the audit will say. A significant part of the value of Stage 01 is being told what not to spend money automating.

05
Questions

The things most people ask before the call.

Where does AI actually help our kind of business?
Anywhere work repeats; anywhere information has to be found, filtered, summarised, or formatted; and anywhere decisions sit in someone's head instead of in a system. The audit exists to find those points in your specific business. We do not arrive with a pre-built answer and assume it applies.

We already use AI tools. Why would we need you?
Using an AI tool is not the same as having an AI capability. The first gets individual employees moving faster. The second reshapes how the business works: shared prompts, integrated data, bespoke tools, decisions embedded in the systems your team already uses. The audit surfaces where the gap between the two is costing you most.

How do you handle our data and our clients' data?
We default to tools you already hold data-processing agreements with and use enterprise or business tiers that exclude training on your data. Where a build needs its own infrastructure, we host in a UK region and sign a DPA. Nothing client-identifiable leaves an approved path. A plain-English data page is available on request.

Will this integrate with our existing stack?
If a tool has an API, a webhook, or exports clean data, we can work with it. If it does not, the audit will say so and the recommendation will either be to change the tool or to work around it. Prior builds have integrated with standard office, CRM, finance, accounting, ticketing and bespoke database systems.

How do you work with larger engagements? Is it just you?
For smaller projects, I deliver directly. For larger ones, I assemble a specialist team from a curated network of freelance engineers, designers and consultants, run the engagement end to end and stay accountable for the outcome. You always know who is doing what and there is no handoff to people you have not met.

What does a full engagement typically cost?
The audit is a fixed fee scoped to your team size. Training is scoped from the audit. A typical build sprint is £4,000 to £20,000 depending on complexity; a straightforward automation sits at the lower end, a fully integrated internal tool at the higher. The monthly retainer starts from £1,500. For most 50- to 200-person firms, a realistic first-year commitment sits between £30,000 and £100,000, most of it spent on builds that measurably return the money within six to nine months.

How long until we see something working?
The audit finishes within a few weeks, depending on team size. The first build is usually live within four weeks of the audit being signed off. Where a quick win is obvious, we can ship a small build in parallel with the audit so the model is proven early.

What happens if AI tools change in six months?
They will. The retainer exists partly for this reason. When a new model, tool, or price point makes an existing build cheaper, faster, or better, we migrate it. Every build is documented around the pattern, not the tool, so the substance survives a vendor swap.
Start with a conversation

Bring one question. We will walk through it.

Thirty minutes. No pitch. You leave the call knowing whether it is worth doing, what it would cost and whether Consilix is the right partner for it.

Book a call