Find Growth Areas and Build Evidence
The promotion conversation ended with "not yet" -- but nobody could point to what specific evidence would change the answer. The feedback felt subjective, and you left without a clear picture of what to work on or how to show progress. This guide walks you through finding the specific gaps between where you are and where you need to be, then building a visible evidence record that grounds the next conversation in facts.
Two products work together in this workflow. Guide is an AI agent that reasons about your organization's engineering standard -- it identifies gaps and gives career advice grounded in your actual skill and behaviour definitions. Landmark reads your engineering artifacts (pull requests, design documents, code reviews) and shows which standard markers your work has evidenced. Together they replace subjective impressions with specific, standards-grounded answers.
Prerequisites
This guide assumes you have completed the setup for both products:
- Getting Started: Guide for Engineers -- install Guide, run codegen, authenticate with Anthropic, process your standard data, and start the service stack.
- Getting Started: Landmark for Engineers -- install Landmark and confirm access to Map's activity layer.
You should also know your role coordinates (discipline, level, and track). If you have not identified those yet, work through See What's Expected at Your Level first -- that guide covers finding your coordinates and understanding what your current level expects.
Ask Guide what's missing
Start by asking Guide what the gap looks like between your current level and your target. Guide reads your organization's skill definitions, behaviour expectations, and markers -- not generic career advice -- and returns specific areas grounded in your standard.
Start Guide and ask a growth question. Lines starting with > are prompts you type inside the Guide session:
npx fit-guide
> What should I focus on to move from J060 to J070 in Software Engineering?
Guide responds with the specific skills and behaviours that change between levels, drawn from your standard data:
Moving from Engineer (J060) to Senior Engineer (J070) in Software Engineering
involves 12 changes across skills and behaviours.
**Biggest skill jumps:**
- **Architecture Design** — Working to Practitioner. At Practitioner, you lead
architecture for a product or platform area, defining module boundaries and
integration patterns. The key shift is from designing within a given
architecture to shaping the architecture itself.
- **Code Review** — Working to Practitioner. At Practitioner, you mentor others
on review quality and define review standards for your area.
**Key behaviour shifts:**
- **Think in Systems** — Practicing to Role Modeling. At Role Modeling, you
shape how the team approaches problems systemically, not just apply systems
thinking yourself.
- **Own the Outcome** — Developing to Practicing. This means consistently
demonstrating ownership in daily work, not just when prompted.
Focus on Architecture Design and Think in Systems first — those represent the
largest shifts in scope and autonomy.
You can also pipe a question directly without entering the interactive session:
echo "What markers define practitioner-level architecture design?" | npx fit-guide
Guide will reference the specific markers from your capability YAML -- the same observable indicators that Landmark uses to match evidence.
Go deeper on specific gaps
Once you know the broad gap, drill into specific areas:
> What does practitioner-level code review look like in our standard?
> How is Think in Systems different at Role Modeling vs Practicing?
> Which of my core skills have the biggest jump to J070?
Each answer is grounded in your organization's definitions. Two people asking the same question get the same foundational answer because the source of truth is shared.
Check your evidence record
Guide tells you what to work on. Landmark tells you what your engineering record already shows. Before you start building new evidence, see where you stand.
See which markers your work has evidenced
npx fit-landmark evidence --email you@example.com
Evidence
architecture_design: 3 matched, 1 unmatched
[matched] Designs services with clear API boundaries
rationale: PR #142 introduced a new service boundary with documented...
[matched] Documents trade-offs in design decisions
rationale: Design doc for auth migration weighed three approaches...
[matched] Defines module boundaries for a bounded domain
rationale: RFC-019 established module boundaries for the billing...
[unmatched] Leads architecture for a product or platform area
code_review: 2 matched, 0 unmatched
[matched] Provides actionable feedback on design intent, not just style
rationale: Review of PR #198 identified a coupling risk between...
[matched] Catches cross-cutting concerns during review
rationale: Review of PR #215 flagged a missing audit trail...
Evidence covers 18/24 artifacts.
Each row shows the artifact, the marker it matched, and the rationale for the match. Filter by skill to focus on a specific gap:
npx fit-landmark evidence --skill architecture_design --email you@example.com
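Conceptually, the evidence report is a per-skill tally of marker matches. A minimal sketch of that aggregation (the data and function names here are hypothetical, not Landmark's internals):

```python
# Hypothetical evidence records: each marker is either matched to an
# artifact or still unmatched. This models the report, not Landmark itself.
evidence = [
    ("architecture_design", "Designs services with clear API boundaries", True),
    ("architecture_design", "Documents trade-offs in design decisions", True),
    ("architecture_design", "Leads architecture for a product or platform area", False),
    ("code_review", "Provides actionable feedback on design intent", True),
]

def summarize(records):
    """Count (matched, unmatched) markers per skill, like the report header."""
    summary = {}
    for skill, _marker, matched in records:
        m, u = summary.get(skill, (0, 0))
        summary[skill] = (m + 1, u) if matched else (m, u + 1)
    return summary

print(summarize(evidence))  # {'architecture_design': (2, 1), 'code_review': (1, 0)}
```

The "3 matched, 1 unmatched" lines in the sample output above are exactly this kind of per-skill rollup.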
Check promotion readiness
See a checklist of next-level markers -- which ones you have already evidenced and which are still outstanding:
npx fit-landmark readiness --email you@example.com
Readiness: you@example.com (J060 -> J070)
Architecture Design (practitioner):
[x] Designs services with clear API boundaries (PR #142)
[x] Documents trade-offs in design decisions (design-doc-auth)
[x] Defines module boundaries for a bounded domain (RFC-019)
[ ] Leads architecture for a product or platform area
Code Review (practitioner):
[x] Provides actionable feedback on design intent, not just style (PR #198)
[ ] Mentors others on review quality
[ ] Defines review standards for the area
5/7 markers evidenced.
Missing: Leads architecture for a product or platform area; Mentors others
on review quality; Defines review standards for the area
Without --target, readiness checks against the next level above your current level. To check against a specific level, add the --target flag:
npx fit-landmark readiness --email you@example.com --target J080
The missing markers become your concrete growth plan -- each one describes an observable action you can work toward.
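Under the hood, a readiness checklist reduces to a set difference: target-level markers minus the markers your evidence already covers. A sketch of that logic, using markers from the sample output (the code itself is illustrative, not Landmark's implementation):

```python
# Hypothetical readiness check: which target-level markers remain unevidenced.
target_markers = {
    "Leads architecture for a product or platform area",
    "Mentors others on review quality",
    "Defines review standards for the area",
    "Provides actionable feedback on design intent, not just style",
}
evidenced = {
    "Provides actionable feedback on design intent, not just style",
}

done = target_markers & evidenced          # markers already shown in your record
missing = sorted(target_markers - evidenced)  # your concrete growth plan

print(f"{len(done)}/{len(target_markers)} markers evidenced")
for marker in missing:
    print(f"[ ] {marker}")
```

Because the computation is a plain set difference, re-running readiness after new artifacts land shows markers moving from the missing set to the done set.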
View skill coverage
See how complete your evidence record is across all expected skills:
npx fit-landmark coverage --email you@example.com
Evidence coverage for You (you@example.com)
18/24 artifacts interpreted (75.0%)
By type:
pull_request 12/15 interpreted
design_document 4/5 interpreted
code_review 2/4 interpreted
Coverage shows how many of your artifacts Landmark has interpreted against the standard, broken down by artifact type. Low coverage in a given type tells you where your evidence record is thin and where to focus your work.
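The overall figure is just the sum of the per-type ratios. A quick sketch reproducing the arithmetic from the sample output above (the counts come from that output; the code is illustrative, not Landmark's):

```python
# Per-type tallies of (interpreted, total) artifacts, as in the sample report.
counts = {
    "pull_request": (12, 15),
    "design_document": (4, 5),
    "code_review": (2, 4),
}

# Sum the per-type numerators and denominators to get the headline ratio.
interpreted = sum(done for done, _total in counts.values())
total = sum(t for _done, t in counts.values())
pct = 100 * interpreted / total

print(f"{interpreted}/{total} artifacts interpreted ({pct:.1f}%)")  # 18/24 (75.0%)
```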
Build evidence in the gaps
Now you know exactly which markers are missing. The goal is not to game the checklist -- it is to do work that naturally demonstrates growth in those areas.
Use Guide to plan your approach
Ask Guide how to build evidence for a specific missing marker:
npx fit-guide
> How can I build evidence for "Leads architecture for a product or platform
> area"? I'm at working-level Architecture Design and need practitioner.
Guide will suggest concrete activities grounded in your standard -- not generic advice. It knows what your organization defines as practitioner-level architecture work and can recommend activities that would produce artifacts Landmark can later interpret.
Look up marker definitions directly
To see the full set of markers defined for a skill at any proficiency level, use Landmark's marker command:
npx fit-landmark marker architecture_design
This shows all markers across all proficiency levels. Filter to a specific level:
npx fit-landmark marker architecture_design --level practitioner
Markers are the observable indicators defined in your capability YAML. They describe what someone at that proficiency level does in practice -- not what they know in theory. Every marker Landmark checks against is visible here.
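Conceptually, the marker command is a lookup over the capability YAML: each marker ties a skill and a proficiency level to one observable indicator. A hypothetical sketch of that lookup (the field names and sample data are illustrative, not your organization's real schema):

```python
# Hypothetical shape of marker data loaded from a capability YAML.
# Field names are illustrative; your organization's schema may differ.
markers = [
    {"skill": "architecture_design", "level": "practitioner",
     "text": "Leads architecture for a product or platform area"},
    {"skill": "architecture_design", "level": "working",
     "text": "Designs services with clear API boundaries"},
]

def markers_for(skill, level):
    """Return the observable indicators for one skill at one level."""
    return [m["text"] for m in markers
            if m["skill"] == skill and m["level"] == level]

print(markers_for("architecture_design", "practitioner"))
```

Filtering with --level corresponds to narrowing this lookup to a single proficiency level.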
Track progress over time
As you do the work and your artifacts accumulate, track whether your evidence record is growing:
npx fit-landmark timeline --email you@example.com
Growth timeline for you@example.com
2025-Q1 architecture_design working
2025-Q2 architecture_design working
2025-Q3 architecture_design practitioner
2025-Q1 code_review working
2025-Q2 code_review working
The timeline shows the highest evidenced proficiency level per skill per quarter. A level appearing for the first time tells you the evidence record has caught up to your growth. Filter by skill to focus on one area:
npx fit-landmark timeline --email you@example.com --skill architecture_design
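The timeline is a max-per-bucket rollup: for each (quarter, skill) pair, only the highest evidenced proficiency level survives. A sketch under the assumption that proficiency levels form a total order (the level names and data are illustrative):

```python
# Hypothetical ordering of proficiency levels, lowest to highest.
LEVEL_ORDER = ["developing", "working", "practitioner"]

# Hypothetical evidence events: (quarter, skill, evidenced level).
evidence = [
    ("2025-Q2", "architecture_design", "working"),
    ("2025-Q3", "architecture_design", "working"),
    ("2025-Q3", "architecture_design", "practitioner"),
]

def timeline(records):
    """Keep only the highest-ranked level seen for each (quarter, skill)."""
    best = {}
    for quarter, skill, level in records:
        key = (quarter, skill)
        if key not in best or LEVEL_ORDER.index(level) > LEVEL_ORDER.index(best[key]):
            best[key] = level
    return best

print(timeline(evidence))
```

In this model, "practitioner" first appearing in a quarter means at least one artifact from that quarter evidenced a practitioner-level marker, which matches how the sample timeline above reads.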
Verify
You have reached the outcome of this guide when you can answer these questions:
- What specific skills and behaviours need to grow? You have asked Guide about the gap between your current level and your target, and you can name the areas with the largest shifts.
- Where does your evidence record already show strength? You have run npx fit-landmark evidence and npx fit-landmark readiness and can identify which markers are evidenced and which are still missing.
- What does the next level look like in practice? You have looked up specific marker definitions with npx fit-landmark marker and can describe the observable actions your target level expects.
- Is your evidence growing over time? You have checked npx fit-landmark timeline and can see whether recent work is producing visible movement.
If any of these are unclear, revisit the relevant step. The readiness checklist is the most direct measure -- when the missing markers from your first run start showing as evidenced, you are making progress.
What's next
This guide covered the end-to-end workflow for finding growth areas and building evidence. For specific tasks within this workflow, see:
- Ask a Growth Question -- ask a specific question and get advice grounded in your actual standard
- Check Progress Toward Next Level -- see whether recent work shows visible movement toward the bar
- See What's Expected at Your Level -- understand the full expectations for your current role
- Data Model Reference -- how skills, levels, and behaviours are structured in the underlying model
- Authoring Agent-Aligned Engineering Standards -- how to define the standard that Guide and Landmark reason about