CosmicMind Python SDK¶
Persistent Memory for AI Systems
CosmicMind is an AI context management platform that gives LLM applications persistent, structured memory. Unlike traditional stateless AI interactions, CosmicMind maintains relationships, learns from conversations, and builds contextual understanding over time, making any LLM-based system smarter, more personalized, and more capable.
What is CosmicMind?¶
CosmicMind transforms ephemeral AI conversations into persistent knowledge. By capturing entities, relationships, and context across interactions, it enables AI systems to:
- Remember previous conversations and learned information
- Understand complex relationships between people, places, and concepts
- Personalize responses based on accumulated user context
- Scale contextual understanding beyond token limits
- Connect information across multiple conversations and users
How It Improves LLM Systems¶
Traditional LLMs are stateless—they forget everything after each conversation. CosmicMind adds persistent memory infrastructure that:
- Extracts entities and relationships from conversations automatically
- Stores knowledge in CosmicMind's optimized data infrastructure for fast retrieval
- Injects relevant context into prompts based on conversation history
- Maintains user-specific memory across sessions and interactions
- Enables multi-user knowledge sharing while preserving privacy
This means your AI applications can provide context-aware responses that reference past interactions, understand user preferences, and maintain coherent long-term conversations—all without manual prompt engineering or hitting token limits.
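As a minimal sketch of that flow (using the `CosmicMindClient` and `ChatRequest` API shown under Type Safety below, with the same illustrative key, base URL, and model), two calls that share a `user_id` let the second turn draw on what the first turn taught the system:

```python
from cosmicmind import CosmicMindClient
from cosmicmind.models import ChatRequest

client = CosmicMindClient(
    api_key="sk-your-key",
    base_url="https://cosmicmind.pansynapse.com/api"
)

# First session: the user states a fact; CosmicMind extracts and stores it.
client.chat.send(ChatRequest(
    messages=["My name is Alice and I manage the Apollo project."],
    user_id="alice_123",
    llm="cerebras",
    llm_model="llama-3.3-70b"
))

# A later session with the same user_id: relevant context is injected
# automatically, so the model can answer without the fact being restated.
response = client.chat.send(ChatRequest(
    messages=["Which project do I manage?"],
    user_id="alice_123",
    llm="cerebras",
    llm_model="llama-3.3-70b"
))
print(response.message)
```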
Use Cases¶
- AI Assistants: Build assistants that remember user preferences and history
- Customer Support: Provide personalized support based on customer history
- Game Masters: Create NPCs with persistent memory and evolving storylines
- Education: Track student progress and adapt teaching approaches
- Research: Maintain connections between research findings and related concepts over time
- Enterprise: Enable AI that understands your organization's context
Installation¶
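Install from PyPI (the package name below is assumed to match the `cosmicmind` import name; check your distribution channel if it differs):

```bash
pip install cosmicmind
```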
Or install from source:
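A typical editable install from the repository listed under Support (exact steps may vary):

```bash
git clone https://github.com/pansynapse/CosmicMind-SDK.git
cd CosmicMind-SDK
pip install -e .
```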
Features¶
- Structured Memory: Persistent context stored securely in CosmicMind's proprietary infrastructure
- Multi-LLM Support: Works with Cerebras, OpenAI, Anthropic, Google, and more
- Avatar System: Create AI personalities with custom traits and knowledge
- Private & Secure: User-specific memory isolation with API key authentication
- Fast: Optimized context retrieval for low-latency responses
- Token Tracking: Built-in usage monitoring and cost estimation
- Simple API: Minimal code to add persistent memory to any AI app
- Type Safety: Pydantic models for validation and IDE autocomplete
Type Safety with Pydantic Models¶
The SDK supports Pydantic models for type-safe API interactions with client-side validation:
```python
from cosmicmind import CosmicMindClient
from cosmicmind.models import ChatRequest, ChatResponse

client = CosmicMindClient(
    api_key="sk-your-key",
    base_url="https://cosmicmind.pansynapse.com/api"
)

# Type-safe request with validation
request = ChatRequest(
    messages=["Hello, who am I?"],
    user_id="alice_123",
    llm="cerebras",
    llm_model="llama-3.3-70b"
)

# Returns typed ChatResponse
response: ChatResponse = client.chat.send(request)

# IDE autocomplete works!
print(response.message)
print(f"Tokens used: {response.token_usage['total_tokens']}")
```
Benefits:
- Client-side validation: Catch errors before API calls
- IDE autocomplete: Know exactly what fields are available
- Type hints: Better code quality and fewer bugs
- Self-documenting: Clear field names and descriptions
Backward Compatible: Dict-based and legacy parameter styles still work!
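For example, a dict-based call might look like the following, reusing the `client` from the example above (the exact dict shape accepted by `client.chat.send` is an assumption, shown only to illustrate the style):

```python
# Dict-based style: the same fields as ChatRequest, passed as a plain dict.
# (Illustrative; the accepted shape is assumed to mirror ChatRequest.)
response = client.chat.send({
    "messages": ["Hello, who am I?"],
    "user_id": "alice_123",
    "llm": "cerebras",
    "llm_model": "llama-3.3-70b",
})
print(response.message)
```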
Supported LLM Providers¶
CosmicMind integrates with multiple LLM providers, allowing you to choose the best model for your use case:
| Provider | Models | Best For |
|---|---|---|
| Cerebras | llama-3.3-70b, llama-3.1-8b | Fast, affordable inference |
| OpenAI | gpt-4, gpt-4-turbo, gpt-3.5-turbo | General-purpose AI tasks |
| Anthropic | claude-3-opus, claude-3-sonnet | Long-form content, analysis |
| Google | gemini-2.0-flash, gemini-pro | Multimodal, quick responses |
| Perplexity | sonar-medium, sonar-small | Web-grounded responses |
All models benefit from CosmicMind's persistent context management.
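Switching providers is a matter of changing the `llm` and `llm_model` fields on the request, as in this sketch (the provider identifier `"anthropic"` and exact model string are assumptions based on the table above; confirm the values your deployment accepts):

```python
from cosmicmind import CosmicMindClient
from cosmicmind.models import ChatRequest

client = CosmicMindClient(
    api_key="sk-your-key",
    base_url="https://cosmicmind.pansynapse.com/api"
)

# Same request shape, different backing provider and model.
response = client.chat.send(ChatRequest(
    messages=["Summarize what you know about me."],
    user_id="alice_123",
    llm="anthropic",             # assumed provider identifier
    llm_model="claude-3-sonnet"  # model name taken from the table above
))
print(response.message)
```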
Support¶
- Documentation: https://docs.pansynapse.com/cosmicmind
- Email: support@pansynapse.com
- Issues: https://github.com/pansynapse/CosmicMind-SDK/issues
License¶
Proprietary - See LICENSE file for details.
Copyright (c) 2025 CosmicMind. All rights reserved.