# AI Strategy KPI Baselines
## Purpose
The AI Strategy commits Feoda to ambitious 18-month targets — including Level 4 (Native) AI maturity across all seven departments. Without baselines, those targets are unmeasurable.
This document establishes:
- The measurement methodology for each KPI in Section 11 of the AI Strategy.
- The baseline reading (the value as of the strategy approval quarter, 2026 Q2).
- The data source and owner for each measurement.
Baselines are captured once per KPI. Subsequent readings are recorded in the quarterly review (see AI Strategy Section 14).
## Status
Draft — methodology defined for all KPIs; numerical baselines pending the first measurement cycle (target: 2026-05-31).
For each KPI marked Pending, the responsible owner commits to capturing the baseline value by the date noted.
## 1. Measurement principles
- One source of truth per KPI. Each KPI has exactly one named system or named owner producing the number; no KPI is measured in two places.
- Define before measuring. A KPI is not "measured" until its definition, formula, data source, frequency, and owner are recorded in this document.
- Same instrument across the horizon. Once a KPI's measurement methodology is fixed, it does not change without a documented amendment. Methodology changes invalidate trend comparisons.
- Quarterly cadence. All KPIs are reported quarterly to the AI Strategy Steering Committee. High-risk KPIs (Section 4) are reviewed monthly.
- Audited annually. A sample of measurements is audited annually by the Head of Technology against the underlying data source.
## 2. Adoption KPIs
| ID | KPI | Definition | Data source | Owner | Cadence | Baseline (2026 Q2) | Notes |
|---|---|---|---|---|---|---|---|
| A-01 | Departments with ≥1 production AI workflow | Count of distinct departments (of 7) where at least one AI workflow is approved by the relevant Department Head and used in production for ≥30 consecutive days. | Department workflow registry (to be created under AP-04). | Head of Technology | Quarterly | Pending — target capture 2026-05-31 | Target: 7/7 by month 12. |
| A-02 | Employees with active AI agent access | % of employees in roles deemed "AI-relevant" (per the role classification published with AP-01) who have an active, identity-bound, named AI account on at least one approved tool, measured against the staff register. | Approved Provider admin consoles + HR staff register. | Head of Technology + HR | Quarterly | Pending — depends on AP-01 + AP-03 | Target: 100% of relevant roles by month 6. |
| A-03 | Out-of-policy AI usage incidents | Count of confirmed incidents in the period where an employee used an unapproved AI provider, used an anonymous account, or handled confidential data through AI outside approved channels. | Audit log (AP-05) + HR/IT incident reports. | Head of Technology | Monthly | 0 confirmed (no audit log yet — see caveat in Notes) | Target: zero by month 6. Caveat: baseline is "no detected incidents" because the audit log infrastructure is not yet built (AP-05). True baseline will be restated once AP-05 ships. |
| A-04 | Foundational AI literacy training completion | % of all current employees who have completed the 8-module foundational AI literacy curriculum (AP-02) and passed the knowledge check. | LMS / training register (to be selected under AP-02). | HR | Monthly during rollout, then quarterly | 0% — curriculum not yet built (AP-02) | Target: 100% by month 6. |
| A-05 | Role-specific advanced training completion | % of department members who have completed the role-specific advanced AI training defined for their department. Measured per department. | LMS / training register. | HR + Department Heads | Quarterly | Pending — curricula to be defined per department in Phase 2 | Target: 100% by month 12. |
| A-06 | Active weekly adoption | % of eligible-role employees with at least one logged interaction with an approved AI tool in the rolling 7-day window. | Approved Provider usage telemetry + audit log (AP-05). | Department Heads (consolidated by Head of Technology) | Quarterly (with monthly internal tracking) | Pending — depends on AP-03 + AP-05 | Target: ≥90% by month 12, sustained. |
| A-07 | AI competency integrated into hiring/onboarding/performance review | Boolean per department: has AI competency been added to (a) job descriptions, (b) onboarding curriculum, (c) performance review template? | HR documentation. | HR | Quarterly | 0/7 departments (pending Phase 4) | Target: 7/7 complete by month 18. |
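To make the A-06 definition concrete, here is a minimal sketch of the rolling 7-day computation. The `(employee_id, date)` record shape is illustrative only; the real telemetry schema from the approved-provider consoles and AP-05 audit log is not yet defined.

```python
from datetime import date, timedelta

def active_weekly_adoption(eligible_ids, interactions, as_of):
    """A-06 sketch: % of eligible-role employees with at least one logged
    interaction with an approved AI tool in the rolling 7-day window.

    eligible_ids: set of employee IDs in AI-relevant roles (per AP-01).
    interactions: iterable of (employee_id, date) pairs (illustrative shape).
    as_of:        date the reading is taken.
    """
    window_start = as_of - timedelta(days=6)  # 7-day window, inclusive of as_of
    active = {emp for emp, day in interactions
              if window_start <= day <= as_of and emp in eligible_ids}
    return 100.0 * len(active) / len(eligible_ids) if eligible_ids else 0.0
```

Note that interactions from employees outside the eligible set are deliberately ignored, so the denominator stays fixed by the AP-01 role classification rather than by whoever happened to use a tool that week.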
## 3. Outcome KPIs
| ID | KPI | Definition | Data source | Owner | Cadence | Baseline (2026 Q2) | Notes |
|---|---|---|---|---|---|---|---|
| O-01 | Median story-to-deployment time (engineering) | Median elapsed wall-clock time from "story moved to In Progress" to "merged + deployed to production", computed over the rolling quarter. Excludes stories marked won't-do or cancelled. | GitHub + CI deployment timestamps. | Head of Technology | Quarterly | Pending — to be captured from Q1 2026 GitHub data by 2026-05-31 | Target: material reduction vs baseline. "Material" = ≥25% by month 12. |
| O-02 | Median client-implementation cycle time | Median elapsed wall-clock time from contract signed to client go-live. Computed across the rolling 12-month window of completed implementations. | Client implementation register (clients/). | Head of Delivery | Quarterly | Pending — to be back-computed from Pymble, Saint Edwards, Al Faisal data by 2026-05-31 | Target: reduction vs baseline. Currently 3 data points — small-sample caveat applies. |
| O-03 | Median support resolution time | Median elapsed time from a support ticket being raised to it being marked Resolved, computed over the rolling quarter. | Support ticketing system (to be confirmed — current system in catalogue). | Head of Support | Quarterly | Pending — system identification + baseline capture by 2026-05-31 | Target: reduction vs baseline. |
| O-04 | Documentation compliance score | Composite score: (% of docs with complete frontmatter × 0.4) + (% of docs with all internal links resolving × 0.4) + (% of months with a CHANGELOG entry × 0.2). Computed by the AI-audit job once AP-04 ships. | Documentation repository + AI audit job (AP-04). | Head of Technology | Quarterly | Manual one-off baseline: ~85% (estimate) — to be replaced with automated reading once AP-04 ships. | Target: ≥98% by month 9. Methodology: reuse the audit pattern from the 2026-04-20 compliance audit. |
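The O-04 composite weighting is easy to misapply by hand, so a minimal sketch of the formula as defined above (the 0.4/0.4/0.2 weights come directly from the KPI definition; function and parameter names are illustrative):

```python
def doc_compliance_score(pct_frontmatter, pct_links_ok, pct_changelog_months):
    """O-04 composite score, per the definition in the table above.

    All three inputs are percentages in [0, 100]:
      pct_frontmatter      - % of docs with complete frontmatter   (weight 0.4)
      pct_links_ok         - % of docs with all links resolving    (weight 0.4)
      pct_changelog_months - % of months with a CHANGELOG entry    (weight 0.2)
    """
    return (pct_frontmatter * 0.4
            + pct_links_ok * 0.4
            + pct_changelog_months * 0.2)
```

Because the weights sum to 1.0, the composite is itself a percentage in [0, 100], which is what the ≥98% target is measured against.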
## 4. Risk KPIs (reviewed monthly)
| ID | KPI | Definition | Data source | Owner | Cadence | Baseline (2026 Q2) | Notes |
|---|---|---|---|---|---|---|---|
| R-01 | Confidentiality incidents | Count of confirmed incidents involving AI handling of Confidential or Restricted data outside approved channels (per AI Strategy data classification, Section 8). | Audit log (AP-05) + Security/Legal incident register. | Head of Technology + Legal | Monthly | 0 known | Target: zero across strategy horizon. Caveat: as with A-03, baseline reflects detection capability, not absence of incidents, until AP-05 ships. |
| R-02 | Audit log coverage | % of AI interactions on Restricted-tier data that are captured in the audit log, measured by sampled comparison against provider-side admin logs. | Audit log (AP-05) + provider admin consoles. | Head of Technology | Quarterly | 0% (audit log not yet built) | Target: 100% by month 6 (AP-05 deliverable). |
| R-03 | Quarterly access review completion | % of quarterly access reviews completed on time (where "on time" = within 14 days of quarter end). Each provider × each role group counts as one review. | Access review register (to be created under AP-01). | Head of Technology | Quarterly | N/A — first review due 2026-09-30 | Target: 100% on time, every quarter. |
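The R-02 "sampled comparison" can be sketched as a membership check of provider-side events against the internal audit log. Using event IDs as the join key is an assumption for illustration; the actual reconciliation key depends on what AP-05 and the provider admin consoles expose.

```python
def audit_log_coverage(sampled_provider_events, audit_log_ids):
    """R-02 sketch: % of sampled Restricted-tier provider-side events
    that also appear in the internal audit log (AP-05).

    sampled_provider_events: list of event IDs drawn from provider admin logs
                             (illustrative join key, not a confirmed schema).
    audit_log_ids:           set of event IDs present in the audit log.
    """
    if not sampled_provider_events:
        return 0.0  # nothing sampled yet, e.g. before AP-05 ships
    captured = sum(1 for ev in sampled_provider_events if ev in audit_log_ids)
    return 100.0 * captured / len(sampled_provider_events)
```

Sampling provider-side and checking internally (rather than the reverse) is what makes this a detection-capability measure: it surfaces interactions the audit log missed, not just interactions it recorded.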
## 5. Maturity KPIs
| ID | KPI | Definition | Data source | Owner | Cadence | Baseline (2026 Q2) | Notes |
|---|---|---|---|---|---|---|---|
| M-01 | Departments at Level 4 (Native) | Count of departments (of 7) assessed at Level 4 against the AI maturity model (AI Strategy Section 6) by the Steering Committee. | Quarterly self-assessment + Steering Committee ratification. | AI Strategy Steering Committee | Quarterly | 0/7 | Target: 7/7 by month 18. |
| M-02 | Departments at Level 3 or above | As above, threshold Level 3 or higher. | As above. | As above. | Quarterly | Pending — baseline assessment scheduled 2026-05-15 | Target: 7/7 by month 12. |
| M-03 | Cross-cutting AI workflows in production | Count of AI workflows that span ≥2 departments and are in production for ≥30 consecutive days. | Department workflow registry (AP-04). | Head of Technology | Quarterly | 0 | Target: ≥3 by month 12. |
| M-04 | Core processes redesigned AI-first per department | Count of department-owned processes that have been (a) explicitly redesigned with AI as a first-class component, (b) approved by the Department Head, and (c) documented under company/processes/ or the relevant solution folder. | Process registry (company/processes/ index). | Department Heads | Quarterly | 0/7 departments with a redesigned process | Target: ≥1 per department by month 9; full redesign by month 18. |
## 6. Baseline-capture plan
The following actions convert the Pending rows above into hard numbers; each lists its owner, target date, and blockers.
| Action | Owner | Target | Blocked by |
|---|---|---|---|
| Compute O-01 baseline from GitHub history (Q1 2026 data) | Head of Technology | 2026-05-31 | None |
| Compute O-02 baseline from Pymble/Saint Edwards/Al Faisal records | Head of Delivery | 2026-05-31 | Confirm dates in client overview.md files |
| Identify support ticketing system + extract Q1 2026 ticket data for O-03 | Head of Support | 2026-05-31 | None |
| Run a one-off documentation compliance audit for O-04 baseline (manual, mirroring 2026-04-20 audit) | Head of Technology | 2026-05-15 | None |
| Conduct first maturity self-assessment per department (M-01, M-02) | AI Strategy Steering Committee | 2026-05-15 | None |
| Publish role classification (which roles are "AI-relevant") for A-02, A-06 | Head of Technology + HR | 2026-05-31 | None |
| Restate A-03, R-01, R-02 baselines once AP-05 (audit logging) ships | Head of Technology | 2026-10-31 (month 6) | AP-05 |
| Restate O-04 with automated reading once AP-04 ships | Head of Technology | 2027-01-31 (month 9) | AP-04 |
## 7. Reading log
This table records each quarterly reading once captured. The Baseline columns above are never updated; only this log grows.
| Quarter | Date captured | Captured by | KPIs updated | Link to report |
|---|---|---|---|---|
| 2026 Q2 | Pending | Pending | Initial baselines per Section 6 plan | TBD |
## 8. Status log
| Date | Author | Change |
|---|---|---|
| 2026-04-22 | Head of Technology | Initial draft — KPI definitions, methodology, baseline-capture plan. Numerical baselines pending the first measurement cycle. |