TrustGraph Containers
TrustGraph uses a modular container architecture in which different containers provide specialized capabilities. This allows flexible deployments that include only the capabilities you need, reducing resource usage and attack surface, while full functionality remains available when all containers are deployed together.
Version strategy
The TrustGraph release process creates containers and packages with the same version number. For best results, match the container and package version numbers.
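Because containers and packages share a version number, you can pin both from a single value. The sketch below is illustrative only; the registry path `docker.io/trustgraph/...` and the `1.0.0` tag are assumptions, not confirmed TrustGraph image names:

```shell
# Assumed image name and tag -- substitute your actual registry and release.
IMAGE="docker.io/trustgraph/trustgraph-flow:1.0.0"

# Extract the tag so the Python package can be pinned to the same version.
TG_VERSION="${IMAGE##*:}"

# The matching package pin:
echo "trustgraph-flow==${TG_VERSION}"
```

Keeping the tag and the package pin derived from one variable avoids version skew between the container and the library it runs.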
Container Overview
Container | Purpose | Key Features | Use Case |
---|---|---|---|
trustgraph-base | Foundation container with basic building blocks, client APIs, and base classes | • Core Python runtime (3.12) • Basic HTTP client capabilities (aiohttp) • Pulsar messaging system integration • Foundational libraries for other containers | Required as a base layer for other containers. Contains minimal dependencies focused on core messaging and HTTP capabilities. |
trustgraph-flow | Main processing container containing the bulk of TrustGraph’s capabilities | • Multi-provider AI integration (OpenAI, Anthropic, Cohere, Mistral, Google Generative AI, Ollama) • LangChain ecosystem with complete text processing • Database support (Milvus, Neo4j, FalkorDB, Cassandra) • RDF/Semantic web capabilities • Document processing and analysis | Core container for most TrustGraph workflows. Deploy when you need full AI processing capabilities, document handling, or database integration. |
trustgraph-mcp | Model Context Protocol (MCP) server functionality | • MCP server implementation • WebSocket-based communication • Lightweight protocol handling | Deploy when you need MCP server capabilities for model context management and protocol-based communication. |
trustgraph-hf | Hugging Face model processing with local ML inference | • PyTorch support (CPU-optimized) • Hugging Face integration (Transformers, sentence transformers, embeddings) • Local ML inference without external API calls • Pre-loaded models (all-MiniLM-L6-v2) | Deploy when you need local ML model inference, text embeddings, or want to avoid external API dependencies for certain AI tasks. |
trustgraph-ocr | Optical Character Recognition and document processing | • Tesseract OCR for text extraction from images • PDF processing with Poppler utilities • Complete document processing pipeline | Deploy when you need to process scanned documents, extract text from images, or handle PDF document analysis. |
trustgraph-bedrock | AWS Bedrock AI services integration | • AWS Bedrock model access • Cloud-based AI inference • Lightweight AWS-specific integration | Deploy when using AWS Bedrock as your AI provider. Provides dedicated integration without the overhead of other AI providers. |
trustgraph-vertexai | Google Vertex AI integration | • Google Cloud Vertex AI model access • Cloud-based AI inference • Google AI Platform SDK integration | Deploy when using Google Vertex AI as your AI provider. Provides dedicated integration for Google’s AI/ML platform. |
Architecture Principles
Modular Design
Each container is purpose-built for specific AI providers or capabilities. This allows you to:
- Mix and match containers based on deployment needs
- Reduce resource usage by only including necessary dependencies
- Minimize attack surface by avoiding unused components
- Scale individual components independently
Common Foundation
All containers share common patterns:
- Base OS: Fedora 42 for security and stability
- Python Runtime: Python 3.12 for modern language features
- Messaging: Pulsar messaging system for distributed communication
- Build Strategy: Multi-stage builds for optimized container sizes
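The shared patterns above can be sketched as a multi-stage Dockerfile. This is an illustrative sketch only, not the actual TrustGraph build; package names and layout are assumptions:

```dockerfile
# Sketch of the shared pattern: Fedora 42 base, Python 3.12, multi-stage build.
FROM fedora:42 AS build
RUN dnf install -y python3.12 python3-pip && dnf clean all
COPY . /src
# Install the package into an isolated prefix inside the build stage.
RUN pip3 install --prefix=/install /src

# The final stage carries only the runtime, keeping the image small.
FROM fedora:42
RUN dnf install -y python3.12 && dnf clean all
COPY --from=build /install /usr/local
```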
Deployment Flexibility
Minimal Deployment:
- trustgraph-base as a base for extension
- trustgraph-flow for the most common AI capabilities
- trustgraph-mcp for MCP protocol support
Document Processing:
- Add trustgraph-ocr for document OCR with Tesseract

Local ML Processing:
- Add trustgraph-hf for local model inference without external APIs

Cloud AI Integration:
- Add trustgraph-bedrock for AWS Bedrock
- Add trustgraph-vertexai for Google Vertex AI (Google AI Studio is supported in trustgraph-flow).
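A minimal deployment like the one above could be expressed as a Compose file. The image names, tags, and Pulsar setup here are assumptions for illustration, not a supported configuration:

```yaml
services:
  pulsar:
    image: apachepulsar/pulsar:latest   # messaging backbone used by the containers
    command: bin/pulsar standalone
  flow:
    image: docker.io/trustgraph/trustgraph-flow:1.0.0   # hypothetical registry/tag
    depends_on: [pulsar]
  mcp:
    image: docker.io/trustgraph/trustgraph-mcp:1.0.0    # hypothetical registry/tag
    depends_on: [pulsar]
```

Swapping capabilities in and out is then a matter of adding or removing services, which is the mix-and-match deployment the modular design is intended to support.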
Container Dependencies
```
trustgraph-base (foundation)
├── trustgraph-flow (most of the capability is here)
├── trustgraph-hf (Hugging Face, local ML, transformer models)
├── trustgraph-ocr (Tesseract)
├── trustgraph-bedrock (AWS Bedrock)
└── trustgraph-vertexai (Google AI with Vertex AI libraries)

trustgraph-mcp (MCP protocol server, independent)
```
Most containers depend on trustgraph-base for core functionality, while specialized containers can be deployed independently based on your specific requirements.
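The dependency structure also suggests how you might extend the stack with your own capability: layer a new image on trustgraph-base, as the specialized containers do. A hypothetical sketch; the image name, tag, and the extra package are assumptions:

```dockerfile
# Hypothetical extension image built on the foundation container.
FROM docker.io/trustgraph/trustgraph-base:1.0.0
# Add only the dependencies your specialized capability needs,
# mirroring how e.g. trustgraph-bedrock adds AWS-specific libraries.
RUN pip3 install boto3
```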