AI & RAG Security Standards

Best practices for deploying Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG) systems while maintaining strict data privacy.

Data Sanitization

Before data is sent to any LLM (even a privately hosted one), Personally Identifiable Information (PII) must be redacted. We implement automated masking layers at the ingestion point, as sketched below.
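
As a rough illustration, here is a minimal sketch of a regex-based masking pass of the kind applied at the ingestion point. The pattern set and the mask_pii helper are illustrative assumptions, not our production masking service; a real deployment pairs patterns like these with a trained PII detector.

```python
import re

# Illustrative patterns only; a production masking layer pairs regexes
# like these with a trained PII detector (e.g. an NER model).
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Replace matched PII with typed placeholder tokens before ingestion."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED_{label}]", text)
    return text

# Applied before any text reaches an embedder or LLM:
clean = mask_pii("Contact Jane at jane.doe@example.com or 555-123-4567.")
# -> "Contact Jane at [REDACTED_EMAIL] or [REDACTED_PHONE]."
```

Masking with typed placeholders, rather than deleting the span outright, keeps sentences parseable, so downstream chunking and embedding quality degrade less.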

Vector Database ACLs

Access Control Lists must be enforced at the vector level: a user should only be able to retrieve context from documents they have permission to view in the original source system. The sketch below illustrates the idea.
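
This sketch uses a small in-memory index for clarity. The VectorRecord fields and the allowed_groups metadata are assumptions standing in for whatever per-chunk metadata filter your vector database exposes at query time.

```python
import math
from dataclasses import dataclass, field

@dataclass
class VectorRecord:
    embedding: list[float]
    chunk_text: str
    source_doc: str
    allowed_groups: set[str] = field(default_factory=set)  # mirrored from the source-system ACL

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms if norms else 0.0

def retrieve(query_embedding: list[float], index: list[VectorRecord],
             user_groups: set[str], k: int = 5) -> list[VectorRecord]:
    # Apply the ACL filter *before* similarity ranking so chunks the user
    # cannot view in the source system never enter the candidate set.
    visible = [r for r in index if r.allowed_groups & user_groups]
    visible.sort(key=lambda r: cosine_similarity(query_embedding, r.embedding),
                 reverse=True)
    return visible[:k]
```

Filtering before ranking guarantees the top-k slots contain only authorized chunks; post-filtering a fixed top-k can silently return fewer results, or none, for heavily restricted users.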

Hallucination Guardrails

We use validation chains to verify LLM outputs against the retrieved context. If a response cannot be grounded in the provided data, the system acknowledges uncertainty rather than fabricating an answer; see the sketch below.
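
A minimal sketch of such a guardrail follows. The token-overlap scorer is a crude stand-in for the verifier a real validation chain would use (an NLI model or a second LLM acting as a judge), and the threshold and fallback message are illustrative.

```python
UNGROUNDED_REPLY = "I can't verify that answer from the retrieved documents."

def grounding_score(answer: str, context: str) -> float:
    """Fraction of answer terms that also appear in the retrieved context."""
    answer_terms = set(answer.lower().split())
    if not answer_terms:
        return 0.0
    context_terms = set(context.lower().split())
    return len(answer_terms & context_terms) / len(answer_terms)

def guarded_answer(draft: str, retrieved_context: str,
                   threshold: float = 0.6) -> str:
    # If the draft cannot be grounded in the provided data, acknowledge
    # uncertainty instead of letting a possible hallucination through.
    if grounding_score(draft, retrieved_context) < threshold:
        return UNGROUNDED_REPLY
    return draft
```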
