AI Transparency Policy

Last updated: October 16, 2025

Framework anchors: SDAIA AI Ethics (KSA), OECD AI Principles, UNESCO AI Ethics, NIST AI RMF 1.0, ISO/IEC 42001 & 23894

1. Purpose & Scope

This page sets out Altaius's commitments for the safe, ethical, and human‑centred use of AI across our leadership training platform, including simulations, the AI coaching assistant, analytics, and marketplace content. It complements our Privacy Notice and Cookie Policy and applies to our employees, contractors, partners, and content creators.

For enterprise programs, these commitments operate alongside customer-specific governance and controls.

2. Our Principles (aligned to SDAIA, OECD, UNESCO, NIST)

We commit to the following principles, adapted to the Saudi/GCC context and global best practice:

  • Human‑Centred & Beneficence: Design for human learning and well‑being; augment, rather than replace, human judgment.
  • Fairness & Inclusion: Detect and reduce bias; ensure equitable experiences and scoring across languages, genders, and cultural contexts.
  • Accountability & Oversight: Clear lines of responsibility; human‑in‑the‑loop for meaningful outcomes; appeal and remediation pathways.
  • Privacy & Data Minimisation: Collect only what we need for learning efficacy and safety; protect by design; honour user and client choices.
  • Transparency & Explainability: Disclose where AI is used; provide accessible explanations of coaching recommendations and scores.
  • Safety, Robustness & Security: Adversarial testing, abuse prevention, content moderation, and secure engineering across the lifecycle.
  • Sustainability: Optimise compute usage and adopt efficient architectures; monitor and reduce environmental impact.

3. Governance & Roles

Altaius operates an AI Management System aligned with the NCA Essential Cybersecurity Controls (ECC), the Saudi Personal Data Protection Law (PDPL), and the NIST AI RMF functions (Govern, Map, Measure, Manage). Our AI Ethics Council, chaired by the CEO, includes leads for AI/ML, Learning Science, Product, Security, Data Protection (DPO), and a Shariah Advisor.

The Council approves high‑risk use cases, oversees risk assessments, and publishes an annual Responsible AI Report.

Key responsibilities:

  • Board/CEO: ultimate accountability for AI risk appetite, reporting, and resourcing
  • AI Ethics Council: policy ownership; approves launches and exceptions; receives red‑team and audit results
  • AI/ML Lead: model documentation, evaluation, monitoring; bias & drift remediation
  • Learning Science Lead: ensures learning efficacy; prevents harmful or manipulative mechanics
  • Security Lead: threat modelling, secure development, vendor risk, vulnerability and incident management
  • DPO/Privacy: DPIAs, data minimisation, cross‑border transfer compliance, subject rights processes
  • Shariah Advisor: assures monetisation, marketplace, and behavioural design align with Islamic finance and ethics

4. Lifecycle Controls (Build → Test → Deploy → Monitor)

We apply layered controls across the AI lifecycle, with stricter measures for higher‑risk features and public sector contexts.

Problem Framing: Define learner outcomes, risks, and safeguards. Run an Ethical Impact Assessment (EIA) alongside privacy DPIA.
Data Sourcing: Prioritise client‑provided data and synthetic/augmented data; exclude sensitive categories unless required with explicit approvals.
Design & Prototyping: Harm analysis; guided prompting and guardrails; content policies for user inputs; accessibility and cultural localisation checks.
Model Development: Model cards; evaluation suites for bias, toxicity, hallucination rates; multilingual testing (Arabic/English).
Pre‑Launch Review: Red‑team scenarios; performance & fairness thresholds; human‑in‑the‑loop plan; fallback and rollback procedures.
Deployment: Feature flags; staged rollout; cookie/consent gating where relevant; audit logs.
Monitoring: Drift detection; safety event tracking; user feedback loops; quarterly bias/quality audits with published deltas.
Incident Response: Playbooks for model or data incidents; pause/rollback criteria; user/admin notifications; root‑cause and corrective actions.
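The drift-detection step in Monitoring above can be sketched in a few lines. This is a minimal illustration, not our production implementation: the function names, sample scores, and the 0.5 standard-deviation review threshold are all illustrative assumptions.

```python
from statistics import mean, pstdev

def drift_score(baseline: list[float], recent: list[float]) -> float:
    """Standardised shift of the recent mean score against the baseline distribution."""
    mu, sigma = mean(baseline), pstdev(baseline)
    if sigma == 0:
        return 0.0 if mean(recent) == mu else float("inf")
    return abs(mean(recent) - mu) / sigma

def needs_review(baseline: list[float], recent: list[float], threshold: float = 0.5) -> bool:
    """Flag a model for human review when score drift exceeds the threshold (illustrative)."""
    return drift_score(baseline, recent) >= threshold

# Hypothetical coaching scores: a stable baseline vs. a recent window that has shifted down.
baseline = [72, 75, 71, 74, 73, 70, 76, 74]
recent = [64, 66, 63, 65, 67, 62]
print(needs_review(baseline, recent))  # → True: the recent mean has drifted well past the threshold
```

In practice a drift check like this would feed the pause/rollback criteria described under Incident Response rather than act on its own.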

5. Learning & Gamification Ethics (Altaius‑specific)

Because we blend AI with game‑based learning, we apply additional safeguards to uphold learner dignity, motivation, and psychological safety.

  • No dark patterns: avoid deceptive timers, pay‑to‑win, or manipulative loops. No monetisation tied to learner stress or loss aversion.
  • Constructive difficulty: calibrate challenge to skill level; provide recoverable failure and reflective debriefs rather than punitive mechanics.
  • Cultural respect: localise narratives and feedback styles for GCC norms (e.g., privacy in corrective feedback, respectful tone).
  • Well‑being & workload: default session lengths and spacing encourage healthy practice; nudge for breaks; disable streak‑pressure for enterprise accounts if requested.
  • Score fairness: validate rubrics against role‑relevant behaviours; separate formative feedback from high‑stakes assessments; provide appeals.
  • Transparency to learners: clearly label AI‑generated tips, scoring criteria, and limitations; provide 'Why am I seeing this?' explanations.

6. Model Cards & Transparency

For every AI model we deploy, we publish a Model Card that discloses:

  • Model Purpose: What the model is designed to do (e.g., generate coaching feedback, score negotiation efficiency)
  • Training Data: General description of datasets used, including language coverage (AR/EN)
  • Performance Metrics: Accuracy, precision, recall, F1 scores where applicable
  • Known Limitations: Scenarios where the model may underperform or produce biased outputs
  • Parity Testing: Results from Arabic-English equivalence checks

Model cards are available to pilot participants and enterprise clients upon request.

7. Arabic-English Parity

We are committed to delivering equal quality across Arabic and English. Our parity checks include:

  • Bilingual Content Review: All scenarios, coaching prompts, and feedback are reviewed by native AR/EN speakers
  • Automated Parity Tests: Regular A/B tests comparing AI outputs across languages
  • User Feedback Loop: Participants can flag language-specific issues for review
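An automated parity test of the kind listed above could compare scores for the same scenarios evaluated in both languages. The sketch below is illustrative only; the scenario IDs and scores are hypothetical, not real platform data.

```python
def parity_gap(scores_ar: dict[str, float], scores_en: dict[str, float]) -> float:
    """Mean absolute score gap across scenarios evaluated in both Arabic and English."""
    shared = scores_ar.keys() & scores_en.keys()
    if not shared:
        raise ValueError("no scenarios evaluated in both languages")
    return sum(abs(scores_ar[s] - scores_en[s]) for s in shared) / len(shared)

# Hypothetical per-scenario quality scores for the same content in each language.
ar = {"negotiation_01": 0.82, "feedback_02": 0.78, "conflict_03": 0.80}
en = {"negotiation_01": 0.84, "feedback_02": 0.79, "conflict_03": 0.83}
print(round(parity_gap(ar, en), 3))  # → 0.02
```

A large gap on any individual scenario would be routed to the bilingual content review described above.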

If you encounter quality disparities between languages, please report them to ai-feedback@altaius.com.

8. Measurable Commitments & KPIs

We track and report on key metrics to ensure our AI systems meet our ethical standards:

  • Quarterly bias audit across Arabic/English for scoring variance; target ≤ 5% variance unless role‑justified.
  • Hallucination rate on factual coaching tips kept below defined threshold; track and reduce over time.
  • Safety events (blocked harmful prompts) monitored with trend reductions quarter‑over‑quarter.
  • Appeal SLA: written response to learner appeals on scoring within 5 business days (see Section 10), with full resolution via the admin/DPO workflow within 10 business days.
  • Annual Responsible AI Report summarising audits, incidents, improvements, and roadmap.
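The ≤ 5% scoring-variance target in the first KPI can be checked mechanically. This sketch assumes per-language mean scores as inputs; the function names and numbers are illustrative, not a description of our audit tooling.

```python
def scoring_variance_pct(mean_ar: float, mean_en: float) -> float:
    """Relative difference between the two language means, as a percentage of the higher mean."""
    hi, lo = max(mean_ar, mean_en), min(mean_ar, mean_en)
    return (hi - lo) / hi * 100

def audit_passes(mean_ar: float, mean_en: float, target_pct: float = 5.0) -> bool:
    """True when Arabic/English scoring variance is within the (illustrative) 5% target."""
    return scoring_variance_pct(mean_ar, mean_en) <= target_pct

# Hypothetical quarterly mean scores per language.
print(audit_passes(74.2, 76.1))  # → True: roughly 2.5% variance, within target
```

Variance above the target without a role-justified reason would trigger the bias-remediation work described in Section 11.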

9. Human-in-the-Loop

For sensitive or high-stakes decisions, we maintain human oversight:

  • Compliance & Risk Assessments: Reviewed by qualified human experts before final reporting
  • Edge Cases: Unusual learner behaviors flagged for human coaching intervention
  • Appeals: All AI-generated scores can be appealed for human review (see below)

10. Appeal Path for AI Decisions

You have the right to challenge any AI-generated assessment. Our appeal process:

  1. Submit Appeal: Flag the decision via the platform or email ai-appeals@altaius.com
  2. Human Review: A qualified human reviewer, independent of the automated system, re-evaluates your scenario performance
  3. Response: You receive a written explanation within 5 business days
  4. Correction: If the appeal is upheld, your scores/feedback are updated and you're notified

11. Bias Mitigation

We actively work to identify and reduce bias in our AI systems:

  • Diverse Training Data: Scenarios reflect Saudi/GCC cultural norms and business contexts
  • Fairness Audits: Regular testing for demographic, linguistic, and cultural bias
  • Inclusive Design: Input from diverse stakeholders (gender, age, nationality) during development

12. Special Populations & Accessibility

Altaius is for adult professional learners (18+). We aim to meet WCAG 2.2 AA where feasible, including keyboard navigation, captions/transcripts, colour‑contrast, and screen‑reader support. We provide alternative formats for debriefs and summaries to reduce cognitive load and enhance inclusion.

13. Environmental Responsibility

We monitor the energy footprint of training/inference, prefer efficient models for production, schedule batch jobs during lower‑carbon grid windows where available, and review compute intensity during model selection.

14. Third‑Party Models & Vendors

We assess vendors for security and ethics (including model risk, data usage terms, and subprocessing). For foundation and generative models, we review provider transparency, safety profiles, and licensing compliance. Where we fine‑tune or adapt models, we document datasets and ensure customer data is not used to train provider models unless expressly permitted.

15. Reporting Concerns & Appeals

Learners or admins can report issues (bias, unsafe content, or incorrect scoring) in‑product or by emailing ethics@altaius.com. We investigate, respond, and remediate. Appeals on key outcomes are reviewed by a human panel (AI Ethics Council delegates).

Contact:
General questions: ethics@altaius.com or privacy@altaius.com
Address: Altaius, Riyadh, Kingdom of Saudi Arabia
