Docker/Podman Compose Deployment
Deploy TrustGraph quickly using Docker Compose for local development and testing environments.
Overview
Docker Compose provides the easiest way to get TrustGraph running locally with all required services orchestrated together. This deployment method is ideal for:
- Local development and testing
- Proof-of-concept implementations
- Small-scale deployments
- Learning and experimentation
Prerequisites
System Requirements
- Docker Engine or Podman Machine installed and running
- Operating System: Linux or macOS (Windows deployments not tested)
- Python 3.x for CLI tools
- Sufficient system resources (recommended: 8GB RAM, 4 CPU cores)
Installation Links
Install Docker Engine (or Podman) and Python 3 before proceeding.
Note: If using Podman, substitute podman for docker in all commands.
Configuration Setup
1. Create Configuration
Use the TrustGraph Configuration Builder to generate your deployment configuration:
- Select Deployment: Choose Docker Compose or Podman Compose
- Graph Store: Select Cassandra (recommended for ease of use)
- Vector Store: Select Qdrant (recommended for ease of use)
- Chunker Settings:
- Type: Recursive
- Chunk size: 1000
- Overlap: 50
- LLM Model: Choose your preferred model:
- Local: LMStudio or Ollama for local GPU deployment
- Cloud: VertexAI on Google (offers free credits)
- Output Tokens: 2048 (safe default)
- Customization: Enable LLM Prompt Manager and Agent Tools
- Generate: Download the deployment bundle
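The builder delivers the deployment as an archive containing docker-compose.yaml and its resource files. A minimal unpack sketch, assuming the archive downloads as deploy.zip (the actual file name may differ):
# Unpack the generated bundle into a working directory
unzip deploy.zip -d trustgraph
cd trustgraph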
2. Install CLI Tools
python3 -m venv env
source env/bin/activate
pip install trustgraph-cli
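To confirm the CLI landed in the virtual environment, a quick check using standard pip/venv commands (assuming the tools install as tg-* console scripts in the venv's bin directory):
# Confirm the package is installed in the active environment
pip show trustgraph-cli
# List the tg-* commands now on the PATH
ls env/bin/ | grep '^tg-'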
Quick Start
1. Launch TrustGraph
docker-compose -f docker-compose.yaml up -d
2. Wait for Initialization
Allow approximately 120 seconds for all services to stabilize. Services such as Pulsar and Cassandra need time to initialize properly.
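Instead of waiting a fixed interval, a small shell loop can poll until the flow listing responds. This is a sketch that assumes tg-show-flows exits non-zero while services are still coming up:
# Poll until the API answers; give up after roughly 5 minutes
for i in $(seq 1 30); do
    tg-show-flows > /dev/null 2>&1 && break
    echo "Waiting for services... ($i/30)"
    sleep 10
done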
3. Verify Installation
Check that processors have started:
tg-show-processor-state
Verify all containers are running:
docker ps
Check that flows are available:
tg-show-flows
4. Load Sample Data
tg-load-sample-documents
Services & Interfaces
Web Workbench
Access the TrustGraph workbench at http://localhost:8888/
Features:
- Document library management
- Vector search interface
- Graph visualization
- Graph RAG query interface
- Prompt management
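To confirm the workbench at http://localhost:8888/ is reachable from the command line before opening a browser, a simple curl check works; it should print a 200 (or another 2xx/3xx code) once the container is serving:
# Print only the HTTP status code returned by the workbench
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8888/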
Monitoring Dashboard
Access Grafana monitoring at http://localhost:3000/
Default credentials:
- Username: admin
- Password: admin
Features:
- TrustGraph dashboard
- Processing metrics
- System health monitoring
- Document processing backlog
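Grafana also exposes its standard health endpoint, so readiness can be checked without logging in:
# Grafana's built-in health check; returns a small JSON payload when ready
curl -s http://localhost:3000/api/health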
Working with Documents
1. Load Documents
Via Workbench:
- Navigate to the Library page
- Select a document (e.g., “Beyond State Vigilance”)
- Click Submit on the action bar
- Choose a processing flow (use default)
- Click Submit to process
Via CLI:
tg-load-pdf path/to/document.pdf
tg-load-text path/to/document.txt
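For bulk ingestion, the same CLI command can be driven from a loop. A minimal sketch, assuming a local directory of PDFs named docs/ (hypothetical path):
# Load every PDF in a local docs/ directory
for f in docs/*.pdf; do
    echo "Loading $f"
    tg-load-pdf "$f"
done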
2. Verify Knowledge Graph
Check graph parsing results:
tg-show-graph
This displays semantic triples in N-Triples format:
<http://trustgraph.ai/e/enterprise> <http://trustgraph.ai/e/was-carried> "to altitude and released for a gliding approach" .
<http://trustgraph.ai/e/enterprise> <http://www.w3.org/2000/01/rdf-schema#label> "Enterprise" .
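Because the output is plain N-Triples, ordinary shell tools can be used for quick sanity checks, for example counting triples or filtering for a particular entity:
# Rough count of extracted triples
tg-show-graph | wc -l
# Show only triples mentioning a given entity (example URI from above)
tg-show-graph | grep 'trustgraph.ai/e/enterprise'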
3. Query with Graph RAG
Via Workbench:
- Navigate to Graph RAG tab
- Enter your question (e.g., “What is this document about?”)
- View contextual responses
Via CLI:
tg-invoke-graph-rag "What is this document about?"
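Several questions can be run back-to-back from the shell. A minimal sketch using only the command shown above (the questions are illustrative):
# Ask a few questions against the same knowledge graph
for q in "What is this document about?" "Who are the key entities mentioned?"; do
    echo "Q: $q"
    tg-invoke-graph-rag "$q"
done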
Troubleshooting
Common Issues
Services Not Starting:
- Wait 120 seconds for full initialization
- Check container status:
docker ps -a
- Review logs:
docker-compose logs [service-name]
Memory Issues:
- Ensure sufficient RAM (8GB recommended)
- Monitor resource usage:
docker stats
Connection Issues:
- Verify ports 8888 and 3000 are available (a port check is shown after this list)
- Check firewall settings
- Ensure Docker daemon is running
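To check whether ports 8888 and 3000 are already bound by another process, lsof (available on most Linux and macOS systems) can list listeners:
# List any process listening on the workbench or Grafana ports
lsof -iTCP:8888 -sTCP:LISTEN
lsof -iTCP:3000 -sTCP:LISTEN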
Debugging Commands
# Check all containers
docker ps -a
# View logs for specific service
docker-compose logs [service-name]
# Check system resources
docker stats
# Verify TrustGraph flows
tg-show-flows
# Check processor state
tg-show-processor-state
Shutdown
Clean Shutdown
docker-compose -f docker-compose.yaml down -v -t 0
Verify Cleanup
# Confirm no containers running
docker ps
# Confirm volumes removed
docker volume ls
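If docker volume ls still shows TrustGraph volumes after the shutdown, they can be removed explicitly; note that prune deletes all unused volumes on the host, not just TrustGraph's:
# Remove a specific leftover volume by name
docker volume rm <volume-name>
# Or remove all unused volumes (affects every project on this host)
docker volume prune -f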
Next Steps
- Production Deployment: See Production Considerations
- Cloud Deployment: Explore AWS, GCP, or Scaleway guides
- Advanced Configuration: Check Security Considerations
- Scaling: Review Minikube for Kubernetes deployment