
Feoda Corporate AI Strategy

Version: 1.0
Effective Date: 2026-04-21
Horizon: 18 months (with quarterly reviews)
Approved By: Head of Technology
Status: Living document — subject to quarterly revision


Table of Contents

  1. Executive Summary
  2. Strategic Vision & Mission Alignment
  3. Guiding Principles
  4. Scope: Internal AI vs Product AI
  5. Strategic Objectives
  6. AI Maturity Model
  7. Governance Model
  8. Guardrails & Risk Framework
  9. Departmental Application
  10. Investment Framework & ROI Criteria
  11. Success Metrics & KPIs
  12. Strategic Roadmap
  13. Roles & Responsibilities
  14. Review & Evolution
  15. Appendices

1. Executive Summary

Feoda will operate as an AI-integrated company within 18 months. Rather than treating AI as a productivity add-on, Feoda will embed AI agents directly into the workflows, decisions, and deliverables of every internal department — from engineering and delivery through finance, HR, and legal.

This strategy commits Feoda to three outcomes:

  1. Operational acceleration — material reduction in time-to-deliver across all internal processes, achieved by AI participating in (not merely assisting) work.
  2. Governance and trust — every AI interaction within Feoda is identity-bound, scope-limited, audited, and reversible. No AI agent operates without traceability.
  3. Scalable adoption — a repeatable framework allows new departments, offices, and roles to onboard AI-integrated workflows without bespoke architecture.

The strategy applies to all Feoda offices (Australia HQ, UAE, India). Any new office — including the planned Singapore HQ relocation — must produce a regional addendum addressing local data protection, employment, and regulatory requirements before AI workflows are deployed in that region.

This document is a living strategy. It will be reviewed quarterly and revised as Feoda's AI maturity advances and as the external AI landscape evolves.


2. Strategic Vision & Mission Alignment

2.1 Vision

Feoda will be an AI-native organisation — where every internal workflow assumes AI participation by default, every employee is augmented by purpose-built agents, and every operational decision is supported by traceable AI reasoning.

2.2 Alignment with Feoda's Mission

Feoda's mission is to deliver one unified financial information system to the education sector. Internal AI adoption directly supports this mission by:

| Mission Element | How Internal AI Strengthens It |
|---|---|
| Single source of truth for clients | AI enforces consistency in our own documentation and delivery, eliminating internal information drift that ultimately reaches clients |
| Real-time, accurate data | AI-assisted operations reduce manual lag in internal reporting, finance, and project tracking |
| Education-sector specialisation | AI agents trained on Feoda's accumulated education-sector knowledge accelerate every new client engagement |
| Operational efficiency for clients | A team that operates efficiently internally delivers efficiency externally |

2.3 Strategic Posture

Feoda adopts an assertive but governed posture toward AI:

  • Assertive — AI is treated as core operational infrastructure, not optional tooling. Departments are expected to identify and adopt AI-integrated workflows, not to opt in.
  • Governed — Every AI deployment passes through this strategy's guardrails. No team may deploy AI workflows that contradict Sections 3 and 8.

3. Guiding Principles

These principles are non-negotiable and govern every AI initiative at Feoda. Conflicts between operational convenience and these principles are resolved in favour of the principles.

Principle 1 — Identity Before Capability

No AI agent operates anonymously. Every AI interaction is bound to an authenticated user identity. The agent's capabilities, data access, and permitted actions are derived from that identity's role.

Principle 2 — Scope Before Power

Every AI agent operates within an explicit scope. Agents are not given general-purpose access to company data; they are given the minimum data and tool access required to perform their function.
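
Principles 1 and 2 combine into a single mechanical rule: an agent's scope is derived from the authenticated identity's role, and an unknown role yields no agent at all. The Python sketch below illustrates the shape of that rule; the role names, tier labels, and tool names are hypothetical, not Feoda's actual role model.

```python
from dataclasses import dataclass

# Illustrative role-to-scope map: each role grants only the data tiers
# and tools that role needs, never blanket access. (Hypothetical values.)
ROLE_SCOPES = {
    "support_consultant": {"data_tiers": {"Public", "Internal"},
                           "tools": {"ticket_lookup", "kb_search"}},
    "finance_analyst":    {"data_tiers": {"Public", "Internal", "Confidential"},
                           "tools": {"variance_report", "expense_categoriser"}},
}

@dataclass(frozen=True)
class AgentScope:
    user: str
    role: str
    data_tiers: frozenset
    tools: frozenset

def scope_for(user: str, role: str) -> AgentScope:
    """Build an agent scope from an authenticated identity's role.

    Refuses unknown roles: no defined role, no agent (Principle 1)."""
    if role not in ROLE_SCOPES:
        raise PermissionError(f"No AI scope defined for role {role!r}")
    grant = ROLE_SCOPES[role]
    return AgentScope(user, role,
                      frozenset(grant["data_tiers"]),
                      frozenset(grant["tools"]))
```

The design point is that capability flows only from identity: there is no constructor for an anonymous or unrestricted scope.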

Principle 3 — Human Authority Over External Impact

AI may draft, propose, analyse, and recommend. AI may not autonomously execute actions that have external impact (client communications, financial transactions, contractual commitments, public statements) without explicit human authorisation.

Principle 4 — Traceability of Every Action

Every AI interaction with company data or systems produces an audit record: who initiated it, what was requested, what the agent did, what data was accessed, what output was produced. Logs are retained per the data classification policy.

Principle 5 — Reversibility by Default

AI agents may take actions that are easily reversible (drafting a document, suggesting a code change, populating a form). Irreversible actions (deletion, sending, signing, paying) require human authorisation.

Principle 6 — Confidentiality is Inviolable

Client data, employee data, financial data, and Feoda's proprietary intellectual property are never sent to AI providers that retain inputs, train on inputs, or operate outside contractually vetted enterprise agreements.

Principle 7 — Disclosure and Trust

When AI materially contributes to a deliverable that reaches a client, partner, regulator, or the public, that contribution is disclosed. Feoda does not pass off AI-generated work as exclusively human, and does not pass off human work as AI-generated.

Principle 8 — Continuous Learning

Feoda treats AI capability as a moving target. The strategy, the approved provider list, the agent architectures, and the workflows are reviewed continuously and updated when better options emerge.


4. Scope: Internal AI vs Product AI

This strategy governs Internal AI only. The two tracks must not be conflated.

| Track | Audience | Governed By |
|---|---|---|
| Internal AI (this strategy) | Feoda employees performing Feoda work | This document |
| Product AI (out of scope here) | Feoda clients using ARM, ERP, EPM | Product roadmaps, ADRs in tech/decisions/, individual product specifications |

4.1 In Scope

  • AI agents used by Feoda staff (developers, consultants, sales, support, finance, HR, legal, leadership)
  • AI integrated into internal workflows (sales pipeline, project delivery, support ticketing, hiring, financial close, contract review)
  • AI tooling for internal documentation, knowledge management, and decision support
  • Client-facing chat assistants operated by Feoda Support — conversational interfaces (web chat, WhatsApp, Telegram, email triage) through which clients reach Feoda for support, when those interfaces are owned and operated by the Feoda Support function. These are governed as an extension of the Support workflow (see Section 9.4) because the AI represents Feoda directly to clients.
  • Identity, access, and audit infrastructure supporting internal AI
  • Internal data flowing into AI providers

4.2 Out of Scope

  • AI features embedded in Feoda's products themselves (ARM AI features, NetSuite Next integrations exposed inside client deployments, agents running inside a client's NetSuite tenant)
  • AI used by clients within their own NetSuite environments
  • Generic NetSuite AI features that Feoda does not control or customise

4.3 Boundary Cases

When a workflow spans both tracks (e.g., an internal AI agent helps a Feoda consultant configure a client's billing engine), the workflow is governed by both this strategy and the relevant product governance. The stricter requirement applies.


5. Strategic Objectives

Feoda commits to the following objectives over the 18-month strategy horizon. Each objective is measurable and reviewed quarterly.

Objective 1 — Universal Identity-Bound AI Access

By month 6, every Feoda employee accesses internal AI exclusively through identity-authenticated agents. Anonymous, shared-credential, or out-of-policy AI usage is eliminated.

Objective 2 — AI-Native Departmental Operation

By month 18, every department named in this strategy operates at Level 4 — Native maturity (Section 6). Core processes are redesigned to assume AI participation by default, and human effort across the organisation is refocused on judgment, relationships, and irreducibly creative tasks.

Objective 3 — Documentation as Living Knowledge

By month 9, all internal documentation is governed by an AI-audited workflow. No documentation change is merged without automated compliance verification and owner approval.
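
In practice, a gate of this kind is a small check that runs in the documentation pipeline before merge. The sketch below verifies frontmatter completeness only, as a minimal example of automated compliance verification; the required keys are assumptions for illustration, not Feoda's real documentation schema.

```python
# Hypothetical required frontmatter keys -- illustrative only.
REQUIRED_KEYS = {"title", "owner", "last_reviewed"}

def frontmatter_complete(doc_text: str) -> bool:
    """Pass only if the document opens with '---'-delimited frontmatter
    containing every required key."""
    lines = doc_text.splitlines()
    if not lines or lines[0].strip() != "---":
        return False  # no frontmatter block at all
    keys = set()
    for line in lines[1:]:
        if line.strip() == "---":          # closing delimiter reached
            return REQUIRED_KEYS <= keys
        if ":" in line:
            keys.add(line.split(":", 1)[0].strip())
    return False  # frontmatter never closed
```

A real gate would also check link validity and changelog completeness (the metrics in Section 11.2), but the merge-blocking pattern is the same: the check returns false, the change does not merge.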

Objective 4 — AI-Native Engineering

By month 12, the engineering function operates AI-natively: development workflows are designed AI-first, AI agents draft, review, test, and document code as a routine participant, and human engineering effort focuses on architecture, judgment, and validation.

Objective 5 — Repeatable AI-Native Onboarding

By month 15, a documented framework allows any new department or office to onboard to AI-native operation under this strategy in under 30 days.

Objective 6 — Zero Confidentiality Incidents

Across the full strategy horizon, zero incidents of client, employee, financial, or proprietary data being exposed to non-approved AI providers.

Objective 7 — AI Capability and Adoption

By month 6, every Feoda employee completes foundational AI literacy training (principles, data classification, approved tools, prompt fundamentals, validation discipline, when not to use AI). By month 12, every department has received role-specific advanced training (workflow design, agent supervision, prompt patterns, output validation). By month 18, AI competency is integrated into hiring criteria, onboarding, and annual performance review. Active adoption — measured as weekly use of approved AI tools tied to defined workflows — reaches ≥90% of eligible roles by month 12 and is sustained thereafter.

This objective is the human enabler for Objectives 2, 3, 4, and 5: Level 4 (Native) operation is impossible without organisation-wide capability uplift.


6. AI Maturity Model

Feoda assesses its AI maturity against a five-level model. Departments may be at different levels; the strategy targets specific levels by specific dates.

| Level | Description | Hallmarks |
|---|---|---|
| 0 — Ad Hoc | Individual employees use public AI tools without governance | No identity binding, no audit, confidentiality risk |
| 1 — Assisted | AI is a sidebar tool employees consult; no integration into workflows | "Open ChatGPT, ask, copy answer back" |
| 2 — Embedded | AI is integrated into specific tools and workflows; identity-bound | AI in IDE, AI in docs platform, AI in ticketing |
| 3 — Integrated | AI is a participant in workflows; agents act with bounded autonomy | Agents draft PRs, triage tickets, populate forms, propose decisions |
| 4 — Native | The organisation's processes assume AI participation; human work focuses on judgment, relationships, and irreducibly creative tasks | Workflows are designed AI-first |

Current State (as of strategy approval)

  • Engineering / Documentation: Level 2 (AI embedded in IDE and docs platform)
  • All other departments: Level 0–1 (no formal governance, mixed tool usage)

Target State by Month 18

Feoda targets Level 4 (Native) across every department within the 18-month horizon. Every department's processes will be redesigned to assume AI participation by default. Human effort across the organisation will refocus on judgment, relationships, client trust, and irreducibly creative tasks.

| Department | Month 18 Target |
|---|---|
| Technology | Level 4 — Native |
| Implementation / Delivery | Level 4 — Native |
| Support (incl. client-facing chat) | Level 4 — Native |
| Sales & Marketing | Level 4 — Native |
| Finance | Level 4 — Native |
| Human Resources | Level 4 — Native |
| Legal | Level 4 — Native, within a conservative agent scope |

This is an explicit, ambitious commitment. Feoda will not treat Level 4 as a distant aspiration — it is the operating target for this strategy.

What "Level 4 with governance" Means at Feoda

Reaching Level 4 in 18 months is feasible only because Feoda is small, vertically focused, and not yet locked into legacy processes. To make the target real without compromising trust, the following non-negotiable conditions apply at every level — including Level 4:

  1. The Guiding Principles (Section 3) and Guardrails (Section 8) bind Level 4 just as they bind Level 0. Native operation does not relax identity binding, scope limits, audit, reversibility, or confidentiality. Level 4 means processes are AI-first; it does not mean AI is unsupervised.
  2. Human authority over external impact (Principle 3) is permanent. Even at Level 4, humans approve client communications, financial transactions, contractual commitments, employee record changes, and irreversible production actions. Level 4 redesigns the work; it does not remove the accountability.
  3. Process redesign is mandatory, not optional. Each department must redesign its core processes during this strategy horizon. "Adding AI to the existing process" is Level 2/3, not Level 4. Department heads own this redesign and document it in company/processes/.
  4. Role redefinition follows process redesign. Job descriptions, evaluation criteria, and team structures will be updated to reflect AI-native operation. HR leads this work in coordination with each department head.
  5. Legal operates at Level 4 within a deliberately narrower agent scope. Legal will operate AI-natively for research, drafting, and clause analysis. AI-issued legal opinions, contract execution, and regulatory filings remain prohibited at every level.

Implication

This is a significant organisational commitment. It compresses what most organisations would phase over 3–5 years into 18 months. It will require sustained investment, decisive leadership, willingness to retire legacy processes, and active management of employee transition. The strategy roadmap (Section 12) is structured to deliver it.


7. Governance Model

7.1 Decision Authority

| Decision Type | Authority |
|---|---|
| Strategy approval and revision | Head of Technology, with executive review |
| Approved AI provider list | Head of Technology |
| New department AI workflow approval | Head of Technology, with department head sign-off |
| Workflow-level changes within an approved scope | Department head |
| Individual agent configuration within a workflow | Workflow owner |
| Emergency suspension of any AI workflow | Head of Technology, any department head, or any employee reporting a concern |

7.2 AI Steering Function

A lightweight AI Steering function — initially the Head of Technology, later expandable — is responsible for:

  • Maintaining this strategy and the approved provider list
  • Reviewing and approving new department AI workflow proposals
  • Investigating incidents and confidentiality breaches
  • Communicating quarterly updates to the company
  • Coordinating with Legal on regulatory developments

This function is not a committee that gates day-to-day work. Department heads operate autonomously within their approved scope and consult the Steering function only for new workflows or boundary cases.

7.3 Approval Thresholds by Risk Tier

AI workflows are classified into three risk tiers. Higher tiers require more rigorous approval.

| Tier | Examples | Approval Required |
|---|---|---|
| Low risk | Internal drafting, code suggestions, internal Q&A, document summarisation, meeting notes | Department head |
| Medium risk | Generating client-facing drafts, populating CRM/ticketing systems, code that ships to clients | Department head + workflow owner sign-off; AI output reviewed by human before external use |
| High risk | Anything touching financial transactions, employee records, client PII, contracts, regulatory filings, or autonomous external action | Head of Technology approval + documented in this strategy's Action Plans |

8. Guardrails & Risk Framework

These guardrails are the operational expression of the Guiding Principles in Section 3. They are binding on every AI workflow at Feoda.

8.1 Data Classification & Handling

All Feoda data is classified into four tiers. The classification dictates which AI providers may receive that data.

| Tier | Examples | Permitted AI Handling |
|---|---|---|
| Public | Marketing material, published case studies, public website content | Any approved provider |
| Internal | Internal documentation, process descriptions, non-sensitive operational data | Approved providers under enterprise/zero-retention agreements |
| Confidential | Client implementation details (anonymised), internal financials, employee operational data | Approved enterprise providers only; named-individual access only |
| Restricted | Client PII (student/parent records), employee PII (salaries, reviews, IDs), financial transactions, contracts, IP/proprietary code | Enterprise providers under explicit DPA; restricted to specific named workflows; full audit logging |
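
The tier-to-provider rule lends itself to a simple programmatic gate at the point where data leaves Feoda's systems. In this sketch the provider identifiers and each provider's permitted ceiling are assumptions chosen to mirror Section 8.2; the authoritative mapping is the table above.

```python
# Tiers in ascending sensitivity, per Section 8.1.
TIER_ORDER = ["Public", "Internal", "Confidential", "Restricted"]

# Assumed ceiling: the most sensitive tier each provider arrangement
# may receive. Identifiers are illustrative, not real configuration.
PROVIDER_CEILING = {
    "groq_free":            "Public",      # free tier: public data only
    "anthropic_enterprise": "Restricted",  # enterprise DPA in place
    "openai_enterprise":    "Restricted",
}

def may_send(data_tier: str, provider: str) -> bool:
    """True only if the provider is approved and its ceiling covers the tier."""
    ceiling = PROVIDER_CEILING.get(provider)
    if ceiling is None:
        return False  # not on the approved list: always refuse
    return TIER_ORDER.index(data_tier) <= TIER_ORDER.index(ceiling)
```

Note the default: an unlisted provider is refused regardless of the data tier, which matches the prohibition in Section 8.2.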

8.2 Approved & Prohibited AI Providers

Approved (initial list, reviewed quarterly)

  • Anthropic Claude (API and enterprise plans)
  • Groq (free tier permitted for retrieval/non-confidential use only)
  • OpenAI (Enterprise tier only — not consumer ChatGPT)
  • Microsoft Copilot for Business (within M365 enterprise agreement)
  • GitHub Copilot for Business

Prohibited

  • Any free/consumer AI tool for any data above the Public tier
  • Any provider without zero-retention or enterprise data-handling guarantees
  • Any provider not on the approved list, regardless of perceived suitability

Adding a provider to the approved list requires Steering function approval and a documented data-handling assessment.

8.3 Human-in-the-Loop Requirements

The following actions require human review and approval before execution:

  • Any communication sent to a client, prospect, partner, regulator, or the public
  • Any change to financial records, invoices, payments, or accounting entries
  • Any change to employee records (HR, payroll, performance)
  • Any code merged to a production-deployed branch
  • Any contractual commitment or legal document
  • Any deletion of operational records

AI may prepare, draft, propose, or queue any of the above. Execution is human.

8.4 Audit & Traceability

Every AI interaction with Feoda data must produce an audit record containing:

  • Authenticated user identity
  • Timestamp
  • Agent invoked
  • Inputs (or hash of inputs for sensitive data)
  • Data sources accessed
  • Outputs (or hash of outputs for sensitive data)
  • Actions taken (if any)

Audit records are retained for a minimum of 24 months for Internal-tier interactions and 7 years for Confidential and Restricted-tier interactions, in line with financial and employment record retention requirements.
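
A record of this shape can be captured by a small helper that stores a hash rather than raw text for sensitive tiers, as the field list above allows. The field and function names here are illustrative, assuming SHA-256 as the hash and JSON lines as the log format.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

def _digest(text: str) -> str:
    """SHA-256 hex digest, used in place of raw sensitive content."""
    return hashlib.sha256(text.encode()).hexdigest()

@dataclass
class AuditRecord:          # fields mirror Section 8.4's list
    user: str               # authenticated user identity
    agent: str              # agent invoked
    timestamp: str
    inputs: str             # literal text, or hash for sensitive tiers
    data_sources: list
    outputs: str            # literal text, or hash for sensitive tiers
    actions: list

def record_interaction(user, agent, inputs, sources, outputs, actions,
                       sensitive: bool) -> str:
    """Serialise one AI interaction; hash inputs/outputs when sensitive."""
    rec = AuditRecord(
        user=user, agent=agent,
        timestamp=datetime.now(timezone.utc).isoformat(),
        inputs=_digest(inputs) if sensitive else inputs,
        data_sources=sources,
        outputs=_digest(outputs) if sensitive else outputs,
        actions=actions,
    )
    return json.dumps(asdict(rec))  # one JSON line per interaction
```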

8.5 Confidentiality & IP Protection

  • Client data isolation — AI agents working on behalf of one Feoda client never have visibility into another client's data, even when the same Feoda employee operates both engagements.
  • Cross-client analysis is permitted only when data is aggregated and anonymised, and only with explicit Steering function approval.
  • Feoda's proprietary code and methodology (configurator logic, custom scripts, billing patterns, client-specific solutions) are processed only by enterprise providers with explicit no-training contractual terms.
  • Client trade secrets disclosed to Feoda during engagement (e.g., school enrolment strategies, financial positions) are treated as Restricted data.

8.6 Regional Compliance

| Region | Primary Regulation | Practical Requirement |
|---|---|---|
| Australia (HQ) | Privacy Act 1988 / Australian Privacy Principles | Notifiable data breaches scheme; APP compliance |
| UAE | PDPL (Personal Data Protection Law) — Federal Decree-Law No. 45 of 2021 | Cross-border transfer restrictions; explicit consent for processing |
| India | DPDP Act 2023 (Digital Personal Data Protection Act) | Consent management; data fiduciary obligations |
| Singapore (planned HQ) | PDPA (Personal Data Protection Act) | Consent for personal data processing; appointed Data Protection Officer. Regional addendum required before HQ relocation. |

Any new office requires a regional addendum to this strategy addressing local data protection, employment law, and sector-specific (education) regulations before AI workflows are deployed in that region.

8.7 Education Sector Sensitivity

Feoda's clients are educational institutions. Data relating to minors (students) receives elevated protection:

  • Student PII is always Restricted-tier
  • AI workflows touching student data require explicit Head of Technology approval
  • Client communications about student matters always pass through human review
  • Student data is never used in AI provider testing, evaluation, or demonstration contexts

9. Departmental Application

This section states, at strategy level, what AI does and does not do in each department. Department-specific policies and procedures derive from these statements and live in company/policies/ and company/processes/.

9.1 Technology (Development & Architecture)

Scope of AI use: Code generation, code review, test generation, documentation, architecture exploration, dependency analysis, refactoring assistance, debugging support, sprint planning aid.

Boundaries: Code touching client production systems requires human review before merge. AI does not autonomously deploy. AI does not have direct production database access. Architecture decisions are AI-supported but human-owned and recorded as ADRs.

Target maturity (month 18): Level 4 — Native. Engineering processes are redesigned AI-first: agents routinely draft pull requests, generate tests from acceptance criteria, propose refactors, maintain documentation in lockstep with code, and participate in architecture exploration. Human engineering effort focuses on architecture, judgment, validation, and the irreducibly creative work AI cannot perform reliably.

9.2 Implementation / Delivery

Scope of AI use: Client requirements drafting, solution design exploration, configuration validation, test case generation, training material creation, status reporting, meeting extraction.

Boundaries: Final client-facing deliverables require human consultant review. AI does not directly modify client NetSuite environments without human execution. Client-specific business logic is captured by humans, validated by AI, never invented by AI.

Target maturity (month 18): Level 4 — Native. Delivery processes are redesigned AI-first across the implementation lifecycle (requirements, design, configuration, testing, training, hand-off). Consultants focus on client relationship, judgment on business fit, and validation of AI-prepared deliverables.

9.3 Sales & Marketing

Scope of AI use: Lead research, proposal drafting, sales material customisation, content creation for marketing campaigns, CRM data enrichment, market analysis, RFP response drafting.

Boundaries: All client-facing communications require human review. Pricing and contractual terms are never set by AI. Competitor intelligence is gathered only from public sources. Sales forecasts are AI-supported but human-owned.

Target maturity (month 18): Level 4 — Native. Sales and marketing processes are redesigned AI-first: lead qualification, account research, proposal generation, content production, and pipeline analysis run through AI agents by default. Sales staff focus on relationships, qualification judgment, and closing.

9.4 Support

Scope of AI use: Ticket triage, known-issue lookup, draft response generation, root-cause analysis from logs, escalation suggestions, support documentation maintenance, trend analysis, client-facing chat assistants on web, WhatsApp, Telegram, and email channels operated by Feoda Support.

Boundaries: Direct client responses always pass through human review for Restricted-tier or contractual matters. Client-facing chat assistants may answer factual support questions autonomously (FAQ, status, documentation lookup, basic troubleshooting) but must hand off to a human for: account-specific changes, financial questions, contractual matters, complaints, anything touching student data, and any topic the assistant is not explicitly scoped to handle. AI does not access client production environments. Resolution actions on client systems are executed by humans. Recurring issues identified by AI feed into product backlog through humans. Every client-facing AI interaction is logged, retained, and discloses that the client is interacting with an AI assistant (Principle 7).
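
The hand-off boundary reduces to a short routing rule: explicitly scoped factual topics are answered autonomously, every forced-handoff topic and every unrecognised topic goes to a human. The topic labels in this sketch are illustrative.

```python
# Topics the assistant may answer autonomously (illustrative labels).
AUTONOMOUS_TOPICS = {"faq", "status", "documentation", "basic_troubleshooting"}

# Topics that always hand off to a human, per Section 9.4.
FORCED_HANDOFF = {"account_change", "financial", "contractual",
                  "complaint", "student_data"}

def route(topic: str) -> str:
    """Return 'assistant' or 'human' for a classified chat topic."""
    if topic in FORCED_HANDOFF:
        return "human"
    if topic in AUTONOMOUS_TOPICS:
        return "assistant"
    return "human"  # default-deny: anything not explicitly scoped goes to a human
```

The default-deny final branch is the important design choice: the assistant never answers a topic it was not explicitly scoped to handle.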

Target maturity (month 18): Level 4 — Native. The Support function is redesigned AI-first across all channels (web chat, WhatsApp, Telegram, email, ticketing). AI handles autonomous resolution within scope; humans handle escalations, relationships, and judgment-based responses.

9.5 Finance

Scope of AI use: Financial analysis, expense categorisation suggestions, variance analysis, forecast modelling support, audit preparation assistance, tax research, vendor invoice processing assistance, financial documentation drafting.

Boundaries: No autonomous transactions. No journal entries posted by AI. Approval of payments, invoices, and entries remains with authorised humans. Financial close, reporting to leadership, and statutory filings are human-owned. Client financial data is Restricted-tier.

Target maturity (month 18): Level 4 — Native. Finance processes are redesigned AI-first for analysis, categorisation, reconciliation preparation, forecasting, and reporting. Posting, approval, and statutory filing remain human actions; the preparation and analysis around them is AI-native.

9.6 Human Resources

Scope of AI use: Job description drafting, candidate screening assistance (with human verification), onboarding material generation, policy drafting, training content creation, HR documentation, internal communication drafting.

Boundaries: No autonomous hiring decisions. No autonomous performance evaluations. Compensation decisions are entirely human. Employee personal data is Restricted-tier. AI does not access employee records without explicit, logged authorisation per request.

Target maturity (month 18): Level 4 — Native. HR processes are redesigned AI-first for sourcing, screening preparation, onboarding, training, policy maintenance, and internal communication. Hiring, evaluation, and compensation decisions remain entirely human.

9.7 Legal

Scope of AI use: Contract drafting assistance, clause comparison, regulatory research, policy drafting, risk identification in proposed agreements, legal documentation maintenance.

Boundaries: No legal opinions issued by AI. No contracts executed by AI. No regulatory filings by AI. All AI legal output reviewed by qualified human before any external use. Privileged communications are never processed by AI providers without explicit assessment.

Target maturity (month 18): Level 4 — Native, within a deliberately narrower agent scope. Legal research, drafting, clause analysis, and policy maintenance are AI-native. Issuing legal opinions, executing contracts, and submitting regulatory filings remain prohibited at every level.


10. Investment Framework & ROI Criteria

AI investments at Feoda are evaluated against a consistent framework. Departments proposing new AI workflows submit proposals scored against the following.

10.1 Investment Categories

| Category | Description | Approval Path |
|---|---|---|
| Foundational | Identity, access, audit, governance infrastructure | Strategic — funded as core operational investment |
| Departmental | AI workflows specific to one department's productivity | Department head budget + Steering review for medium/high-risk tiers |
| Cross-cutting | Workflows spanning multiple departments | Steering function approval, allocated to most-impacted department |
| Experimental | Proofs of concept, capability evaluation | Capped budget per quarter, Steering function awareness |

10.2 ROI Criteria

Each AI workflow proposal addresses:

  1. Time saved — measured in hours per week or per cycle
  2. Quality improvement — measured by error reduction, consistency improvement, or scope expansion
  3. Capacity unlocked — work the team can now do that previously was infeasible
  4. Risk reduction — measured by reduction in compliance, security, or operational risk
  5. Cost — full cost including provider fees, integration effort, training, ongoing maintenance
  6. Risk introduced — what could go wrong; mitigations in place

A workflow may proceed only if it scores positively against at least one of criteria 1–4 and its cost-to-benefit ratio is acceptable to the approving authority.
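
That gate can be expressed as a small check. The scoring scale and the acceptable cost-to-benefit ratio below are assumptions for illustration; in practice the approving authority sets both.

```python
def may_proceed(scores: dict, benefit: float, cost: float,
                max_ratio: float = 1.0) -> bool:
    """Section 10.2 gate (sketch): proceed only if at least one of
    criteria 1-4 scores positively AND cost/benefit is acceptable.

    scores maps criterion name -> signed score; max_ratio is the
    approving authority's cost-to-benefit threshold (assumed)."""
    gains = ["time_saved", "quality", "capacity", "risk_reduction"]
    if not any(scores.get(g, 0) > 0 for g in gains):
        return False               # no positive criterion 1-4: refuse
    return cost <= benefit * max_ratio
```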

10.3 Stop Criteria

Any AI workflow may be paused or terminated if:

  • It produces a confidentiality, compliance, or safety incident
  • Its measured benefit fails to materialise after a reasonable evaluation period
  • A better alternative becomes available
  • The provider's terms or capabilities change in ways that violate this strategy

11. Success Metrics & KPIs

Feoda measures the success of this strategy against the following KPIs, reported quarterly.

11.1 Adoption KPIs

  • Departments with at least one production AI workflow (target: 7 of 7 by month 12)
  • Employees with active AI agent access (target: 100% of relevant roles by month 6)
  • Anonymous / out-of-policy AI usage incidents (target: zero by month 6)
  • Foundational AI literacy training completion (target: 100% of employees by month 6)
  • Role-specific advanced training completion (target: 100% of department members by month 12)
  • Active weekly adoption of approved AI tools (target: ≥90% of eligible roles by month 12, sustained thereafter)
  • AI competency integrated into hiring, onboarding, and performance review (target: complete by month 18)

11.2 Outcome KPIs

  • Median story-to-deployment time (engineering, target: material reduction vs baseline)
  • Median client-implementation cycle time (delivery, target: reduction vs baseline)
  • Median support resolution time (target: reduction vs baseline)
  • Documentation compliance score (frontmatter completeness, link validity, changelog completeness — target: ≥98% by month 9)

11.3 Risk KPIs

  • Confidentiality incidents (target: zero across strategy horizon)
  • Audit log coverage (target: 100% of AI interactions on Restricted-tier data)
  • Quarterly access review completion (target: 100% on time)

11.4 Maturity KPIs

  • Departments at Level 4 (Native) (target: 7 of 7 by month 18 — see Section 6)
  • Departments at Level 3 or above (target: 7 of 7 by month 12)
  • Cross-cutting AI workflows in production (target: at least 3 by month 12)
  • Core processes redesigned AI-first per department (target: at least one redesigned process per department by month 9; full redesign by month 18)

Baselines for time-based metrics are captured in the first quarter following strategy approval.


12. Strategic Roadmap

The strategy is executed in four phases over 18 months. Each phase produces specific deliverables and exit criteria. Phase boundaries are reviewed at quarterly checkpoints.

Phase 1 — Foundation (Months 1–3)

Goal: Establish the identity, access, governance, and infrastructure that every subsequent phase depends on.

Deliverables:

  • Approved provider list ratified and communicated
  • Identity and role architecture defined and implemented (extends ROADMAP Phase 3)
  • Audit logging infrastructure in place
  • Documentation governance workflow live (AI-audited PRs)
  • This strategy's Action Plans folder populated with the first set of action plans
  • Department-level policies drafted in company/policies/
  • Foundational AI literacy training designed and delivered to 100% of employees (principles, data classification, approved tools, prompt fundamentals, validation discipline, when not to use AI) — completion mandatory before expanded access is granted

Exit criteria: All employees authenticate to internal AI through identity-bound agents. All AI interactions produce audit records. Documentation governance is enforced automatically. Foundational training is complete across the organisation.

Phase 2 — Engineering & Delivery to Native (Months 4–9)

Goal: Bring Tech and Delivery to Level 4 — Native. These departments lead because they have the highest current maturity and the clearest cycle-time leverage.

Deliverables:

  • Developer agent in production with PR drafting, code review, test generation, and convention enforcement
  • Implementation agent in production for client requirements, solution design support, and configuration validation
  • Sprint workflow integration with Azure DevOps
  • Meeting-extraction workflow fully automated, with AI routing knowledge to the correct folders
  • Engineering core processes redesigned AI-first (sprint planning, story refinement, PR lifecycle, test strategy, release process)
  • Delivery core processes redesigned AI-first (requirements capture, solution design, configuration, testing, training, hand-off)
  • Engineering and Delivery role descriptions updated to reflect AI-native operation
  • Measured baselines for engineering and delivery cycle times
  • Role-specific advanced training delivered to Engineering and Delivery (workflow design, agent supervision, prompt patterns, validation discipline, redesigned-process operation)
  • Train-the-trainer programme established: each department nominates internal AI champions trained to lead capability uplift in their function

Exit criteria: Engineering and Delivery operate at Level 4. Their redesigned processes are documented in company/processes/. Cycle-time KPIs show step-change improvement vs baseline. Department AI champions are in place across all seven departments.

Phase 3 — Operational Departments to Native (Months 7–15, overlapping)

Goal: Bring Sales & Marketing, Support, Finance, HR, and Legal to Level 4 — Native. Each department's core processes are redesigned AI-first within its scope and boundaries (Section 9).

Deliverables:

  • Department-specific agents and workflows for each named department
  • Department-specific policies finalised in company/policies/
  • Role-specific advanced training delivered to Sales & Marketing, Support, Finance, HR, and Legal as each onboards (workflow design, agent supervision, validation discipline, sector-specific patterns)
  • Cross-departmental community of practice established: monthly internal forum, shared prompt and pattern library, recurring case-study reviews
  • Each department's core processes redesigned AI-first and documented in company/processes/
  • Updated role descriptions and team structures per department
  • Sales, Support, Finance, HR, and Legal each with at least three production AI workflows covering their core operations
  • Client-facing chat assistant in production for Support

Exit criteria: All five operational departments operate at Level 4 within their defined scope. Risk KPIs remain at target. Zero confidentiality incidents. Active weekly adoption ≥90% of eligible roles. Community of practice is self-sustaining.

Phase 4 — Consolidate, Scale & Refine (Months 13–18)

Goal: Consolidate Level 4 operation across the organisation, codify the framework so additional departments and offices onboard rapidly, and prepare the next strategy version.

Deliverables:

  • All seven departments verified at Level 4 against documented criteria
  • Department onboarding playbook (the framework referenced in Section 5, Objective 5)
  • New office onboarding template (regional addenda)
  • Quarterly retrospective process for AI workflow effectiveness
  • Full process library in company/processes/ reflecting AI-native operation
  • AI competency integrated into hiring criteria, onboarding curriculum, and annual performance reviews for every role
  • Continuous-learning programme replaces one-off training: ongoing curriculum, capability-refresh cycles, internal certification path, and a permanent training budget line
  • Strategy v2.0 drafted based on 18 months of learning

Exit criteria: All seven departments operating at Level 4. Onboarding a new department takes under 30 days. AI capability is embedded in the people lifecycle (hire, onboard, develop, review). Strategy v2.0 ready for review.


13. Roles & Responsibilities

| Role | Responsibility |
| --- | --- |
| Head of Technology | Owns this strategy. Chairs the Steering function. Approves the provider list. Approves high-risk workflows. Final authority on AI-related decisions. |
| Department Heads | Approve and own AI workflows within their department. Ensure their teams operate within strategy guardrails. Report KPIs quarterly. |
| Workflow Owners | Configure, monitor, and maintain individual AI workflows. Respond to incidents within their workflow. |
| All Employees | Use AI only through approved, identity-bound agents. Report concerns or incidents immediately. Comply with department-specific policies. |
| Legal (when established as a separate function) | Regulatory monitoring across all regions. Contract review for new providers. Privilege protection. |
| Finance | Audit log retention compliance. AI provider cost tracking. ROI reporting. |

A formal RACI matrix per department workflow is maintained in company/strategy/raci/ (created as needed).


14. Review & Evolution

14.1 Review Cadence

| Review | Frequency | Output |
| --- | --- | --- |
| Quarterly strategy review | Every 3 months | Updated KPI report; strategy revisions if needed |
| Annual major revision | Every 12 months | Strategy version increment (e.g., 1.0 → 2.0) |
| Triggered review | On significant external change (regulatory, provider, capability) or incident | Targeted update to affected sections |

14.2 Change Process

  1. Proposed changes submitted to the Head of Technology
  2. Impact assessment against Sections 3 (Principles) and 8 (Guardrails)
  3. Stakeholder consultation across affected departments
  4. Approval by Head of Technology
  5. Strategy updated; CHANGELOG entry; company-wide communication
  6. Derived policies and procedures updated to align

14.3 Sunset & Replacement

This strategy will be superseded when any of the following occurs:

  • Feoda's organisational structure changes materially
  • The AI landscape changes such that the strategy's framing is no longer adequate
  • A new strategy version (e.g., 2.0) is approved that explicitly replaces it

In the interim, this is the authoritative strategy.


15. Appendices

15.1 Glossary

| Term | Definition |
| --- | --- |
| AI Agent | A configured AI capability with defined instructions, tool access, and data scope, invoked by an authenticated user |
| Identity-Bound | Operating only when associated with a specific authenticated user |
| Workflow | A repeatable sequence of work steps in which AI is a participant |
| Steering Function | The governance body responsible for this strategy and AI decisions at Feoda |
| Restricted-Tier Data | The most sensitive Feoda or client data — see Section 8.1 |
| Workflow Owner | The individual accountable for the configuration, performance, and compliance of a specific AI workflow |
| Regional Addendum | A region-specific supplement to this strategy addressing local regulatory and operational requirements |

15.2 Out of Scope (Explicit)

The following are explicitly not governed by this strategy:

  • AI features embedded in Feoda's products themselves — i.e., AI that ships inside ARM, runs inside a client's NetSuite tenant, or is part of a product deliverable (governed by product roadmaps and ADRs). Note: client-facing chat assistants operated by Feoda Support are in scope — see Section 4.1.
  • Third-party AI features clients may use within their own NetSuite environments
  • Personal use of AI by employees outside of Feoda work (subject to general acceptable-use policy, not this strategy)

15.3 Related Documents

The following documents relate to this strategy but are governed and maintained separately:
  • PROJECT_INSTRUCTIONS.md — Documentation governance (will be revised under Phase 1)
  • ROADMAP.md — Documentation platform roadmap (Phase 3 of which is foundational to this strategy)
  • About Feoda — Company overview and structure
  • company/policies/ — Department-specific policies derived from this strategy (to be populated in Phase 1)
  • company/processes/ — Department-specific processes derived from this strategy (to be populated through Phases 2–4)
  • company/strategy/action-plans/ — Time-bound execution plans (to be created in Phase 1)

15.4 Document Control

| Field | Value |
| --- | --- |
| Version | 1.0 |
| Effective date | 2026-04-21 |
| Review cycle | Quarterly |
| Next review | 2026-07-21 |
| Owner | Head of Technology |
| Approvers | Head of Technology |
| Distribution | All Feoda employees; visible on internal documentation site |

This is a living strategy. It will evolve as Feoda's AI maturity advances, as new offices and departments are onboarded, and as the external AI landscape changes. Every employee is responsible for operating within its principles and for raising concerns when they identify gaps.