Trust & legal

AI use & data policy

Last updated: 2026-04-25 · Pre-launch

How Jubi uses AI models, what data flows through them, the lines we won't cross, and the responsibilities that stay with you as the customer. Applies to both Mode 1 (your AI, gated by Guardian) and Mode 2 (agents built on Jubi).

Informational, not contractual. Specifics evolve as model contracts, regional inference options, and regulation change. Binding obligations arise only from a signed engagement (Master Service Agreement, Data Processing Agreement, and any deployment-specific addenda). Where this page differs from the executed engagement, the engagement controls. For the current version of any engagement document, email privacy@jubi.my.

1. Our AI governance principles

  1. Grounded over generative. When an answer can be grounded in the customer's data or Atlas, we ground it. When it can't, the system is designed to say so rather than guess.
  2. Permission-first. AI access is bound to the requesting user's identity and the permissions Atlas/Guardian apply. The AI is not a privilege escalation.
  3. Auditable end to end. Every AI exchange is logged: prompt, policy decision, data accessed, and response. Every exchange is replayable.
  4. Customer data is not training data. We do not train or fine-tune our own models on customer data, and we use provider endpoints configured to disable training and retention beyond inference.
  5. Human in the loop. Jubi is decision-support tooling. Material decisions about people, money, or compliance should be reviewed by a human before action.
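To make principle 3 concrete, here is a minimal sketch of what one replayable exchange record might look like. This is illustrative only, not Jubi's actual schema; the class name, field names, and values are all assumptions.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIExchangeRecord:
    """One logged AI exchange: who asked, what policy decided,
    what data was touched, and what came back."""
    user_id: str
    prompt: str
    policy_decision: str          # e.g. "allow", "redact", "deny"
    data_sources_accessed: list[str] = field(default_factory=list)
    response: str = ""
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        # Serialise the full record so the exchange can be replayed later.
        return json.dumps(asdict(self), ensure_ascii=False)

record = AIExchangeRecord(
    user_id="u-1234",
    prompt="Summarise Q3 churn by region",
    policy_decision="allow",
    data_sources_accessed=["warehouse.churn_q3"],
    response="Churn rose in EMEA...",
)
print(record.to_json())
```

Capturing the policy decision and accessed sources alongside prompt and response is what makes a log auditable rather than merely a chat transcript.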

2. Models we use

Jubi runs large language models from third-party providers. The current default providers are Anthropic and OpenAI. We may add or substitute providers as the model landscape evolves; material changes that affect customer behaviour are surfaced to customer admins.

Routing between providers is decided by capability, cost, and policy at the platform level, not by user choice. If your engagement requires a specific provider be excluded, raise it during scoping and we will document the exclusion in the engagement.
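A routing decision of this shape (capability first, then cost, with engagement-level exclusions) can be sketched as follows. The provider names, capability tags, and prices are illustrative assumptions, not Jubi's real routing table.

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    capabilities: set[str]        # e.g. {"long-context", "tool-use"}
    cost_per_1k_tokens: float

# Illustrative catalogue; real models, capabilities, and prices differ.
PROVIDERS = [
    Provider("anthropic", {"long-context", "tool-use"}, 0.015),
    Provider("openai", {"tool-use", "vision"}, 0.010),
]

def route(required: set[str], excluded: set[str] = frozenset()) -> Provider:
    """Cheapest provider that meets the capability requirements and
    is not excluded by the customer's engagement."""
    eligible = [
        p for p in PROVIDERS
        if required <= p.capabilities and p.name not in excluded
    ]
    if not eligible:
        raise LookupError("no eligible provider for this request")
    return min(eligible, key=lambda p: p.cost_per_1k_tokens)
```

The `excluded` parameter is how a documented engagement-level provider exclusion would enter the decision: it removes a provider from the eligible set before cost is considered.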

3. Customer data and model training

Customer responsibility. If your users paste customer data, employee data, or other sensitive content into a third-party AI tool that is not routed through Jubi (Mode 1 BYOAI is gated; consumer ChatGPT outside Guardian is not), the protections above do not apply to that content. Configure your network controls so that AI traffic flows through Jubi.
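One way to enforce "AI traffic flows through Jubi" at the network layer is an egress rule that blocks direct connections to AI provider endpoints unless they transit the gateway. A minimal sketch of that decision logic, assuming a hypothetical gateway host and a hand-maintained host list:

```python
# Provider hosts to gate; maintain this list for your environment.
AI_PROVIDER_HOSTS = {"api.openai.com", "api.anthropic.com", "chat.openai.com"}

def egress_allowed(dest_host: str, via_gateway: bool) -> bool:
    """Block direct traffic to AI providers; require it to transit
    the Jubi/Guardian gateway. All other traffic is unaffected."""
    if dest_host in AI_PROVIDER_HOSTS:
        return via_gateway
    return True
```

In practice this logic would live in your egress proxy or firewall policy rather than application code; the point is that consumer AI endpoints are reachable only through the gated path.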

4. Inference residency and data transfers

5. What goes to the model

What does not go to the model:

6. Sensitive use cases (what we won't sell or support)

Jubi will not knowingly support, and the AUP prohibits, deployment of the platform for the following:

This list is not exhaustive. We may decline or terminate an engagement for any deployment we consider to fall within prohibited or unduly high-risk categories under the EU AI Act, our values, or the AUP.

7. EU AI Act framing

In its default configuration, Jubi is general decision-support tooling for enterprise data analysis. We track the EU AI Act, and Jubi is positioned to support the obligations that fall on AI providers and deployers in our value chain.

Whether a particular customer use case constitutes a high-risk AI system under Annex III of the EU AI Act, or otherwise falls within a regulated category, depends on the use case (recruitment, credit scoring, education access, essential public services, law enforcement, etc.) and on how the customer deploys Jubi. The customer is responsible for assessing applicability and meeting the obligations that fall on it as the deployer.

What Jubi provides to support customer compliance:

8. GDPR Article 22 (automated decisions)

Jubi does not, on its own, make decisions producing legal or similarly significant effects on individuals. Jubi is decision-support: it surfaces information for a human user.

If a customer chooses to use Jubi as part of a workflow that produces solely automated decisions with legal or significant effect on natural persons, the customer is responsible for meeting its Article 22 GDPR obligations, including providing data subjects the right to obtain human intervention, to express their point of view, and to contest the decision. Customer engagements involving such workflows require additional scoping; we will not silently ship a deployment that crosses this line.
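The "human in the loop" requirement can be enforced structurally: a decision record that cannot take effect until a named human has signed off. This is a hypothetical sketch of the pattern, not a Jubi API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    subject_id: str
    ai_recommendation: str
    reviewer: Optional[str] = None   # set only when a human signs off

    def apply(self) -> str:
        # Refuse to act on an unreviewed AI recommendation.
        if self.reviewer is None:
            raise PermissionError(
                "human review required before this decision takes effect"
            )
        return f"{self.ai_recommendation} (approved by {self.reviewer})"
```

Making the unreviewed path raise, rather than merely warn, is what keeps the workflow from silently becoming a solely automated decision.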

9. Accuracy, hallucinations, and grounding

Large language models can produce plausible but incorrect content. Jubi's design — Atlas-grounded answers, output validation, citation requirements — reduces that risk for queries about your data. It does not eliminate it.
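One form of the output validation mentioned above is checking that every citation in an answer points to a document actually retrieved for that query. A minimal sketch, assuming a hypothetical `[src:...]` citation syntax (not Jubi's actual format):

```python
import re

def citations_grounded(answer: str, retrieved_ids: set[str]) -> bool:
    """True only if the answer cites at least one source and every
    cited source was actually retrieved for this query."""
    cited = set(re.findall(r"\[src:([\w-]+)\]", answer))
    return bool(cited) and cited <= retrieved_ids
```

A check like this cannot tell whether the cited passage supports the claim, which is why verification by a human remains necessary; it only catches answers that cite nothing or cite sources the system never saw.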

Verify before you act. AI answers — even grounded ones — may be inaccurate, incomplete, or out of date. Customers must verify outputs before relying on them for decisions about money, people, regulatory filings, or anything else where being wrong has cost. Jubi does not warrant the accuracy of AI-generated content.

10. Bias, fairness, and explainability

We work to reduce bias in two places: in how we use models (we don't put protected-class data into prompts unless the customer explicitly does so as part of an analysis) and in how we present results (citations, source attribution, ability to drill into the data behind any answer).

We do not warrant that AI outputs are unbiased or fair. Bias detection and correction in the customer's source data and Atlas definitions is the customer's responsibility. We will engage in good faith on bias issues raised under an active engagement.

11. Model evaluation and red teaming

We periodically evaluate the models and the platform together against:

Findings inform Guardian's input/output validation. Test cadence and results are not published in detail to avoid arming attackers; summary results are available to customers under NDA.

12. Customer controls

13. Prompt-injection and abuse defence

Guardian inspects inputs for known prompt-injection patterns and validates outputs against Atlas grounding. We catalogue and test against known evasion techniques. No defence is perfect; the customer should treat AI output as untrusted input to its own systems unless the customer has independently validated it.
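Pattern-based input inspection of the kind described can be sketched as a deny-list scan. The patterns below are a tiny illustrative sample, not Guardian's actual rule set; real defences are broader and layered with output validation.

```python
import re

# Illustrative deny-list; a production filter has many more patterns
# and combines them with semantic and output-side checks.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|previous|above) instructions", re.I),
    re.compile(r"you are now\b", re.I),
    re.compile(r"system prompt", re.I),
]

def flag_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection pattern."""
    return any(p.search(user_input) for p in INJECTION_PATTERNS)
```

Because pattern lists can always be evaded, this belongs in a defence-in-depth stack; it is consistent with the advice above to treat AI output as untrusted input regardless.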

14. Changes to this policy

We may update this page as the model landscape, regulation, and our environment evolve. Material changes are reflected in the "Last updated" date. For binding commitments — what we promise to a specific customer — see the engagement, not this page.

Privacy & AI policy questions: privacy@jubi.my · Security: security@jubi.my