API Reference

Complete API reference for the CosmicMind Python SDK.

CosmicMindClient

Main client class for interacting with CosmicMind.

from cosmicmind import CosmicMindClient

client = CosmicMindClient(
    api_key: str,              # Your API key (required)
    base_url: str = "http://localhost:8000",  # API base URL
    api_version: str = "v1"    # API version (defaults to v1) - sent in X-API-Version header
)

Note: API versioning is handled via the X-API-Version header, not in request payloads. The api_version parameter sets this header for all requests made by the client.
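For reference, a minimal standalone sketch of the headers the client attaches to every request, mirroring the `_get_headers` method shown in the source listing further down this page:

```python
# Sketch of the per-request headers built by the client, mirroring
# _get_headers in the source listing on this page.
from typing import Dict

def build_headers(api_key: str, api_version: str = "v1") -> Dict[str, str]:
    return {
        "x-api-key": api_key,            # authentication
        "Content-Type": "application/json",
        "X-API-Version": api_version,    # versioning lives here, not in payloads
        "User-Agent": "CosmicMind-Python-SDK/3.0.0",
    }

headers = build_headers("sk-your-key", api_version="v1")
```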

Methods

health()

Check API health status.

health = client.health()
# Returns: Dict[str, Any]

Chat API

Access via client.chat.

send()

Send a chat message and get AI response.

Parameters:

The send() method accepts parameters in three different styles:

Style 1: Pydantic Model (Recommended)

from cosmicmind.models import ChatRequest

request = ChatRequest(
    messages=["Hello!"],              # Required: List of messages
    user_id="alice",                # Default: "default_user"
    llm="cerebras",                   # Default: "cerebras"
    llm_model="llama-3.3-70b",       # Default: "llama-3.3-70b"
    llm_api_key=None,                 # Optional: Organization LLM API key
    max_tokens=None                   # Optional: Max tokens in response
)
response = client.chat.send(request)

Style 2: Dictionary

response = client.chat.send({
    "messages": ["Hello!"],
    "user_id": "alice",
    "llm": "cerebras",
    "llm_model": "llama-3.3-70b"
})

Style 3: Legacy Parameters (Backward Compatible)

response = client.chat.send(
    message="Hello!",                 # Single message string
    user_id="alice",                  # Default: "default_user"
    llm="cerebras",                   # Default: "cerebras"
    model="llama-3.3-70b",           # Default: "llama-3.3-70b"
    llm_api_key=None,                 # Optional
    max_tokens=None                   # Optional
)

Parameter Details:

- request - ChatRequest model or dict (new style, recommended)
- message - Single message string (legacy style, use with other kwargs)
- user_id - User identifier for context tracking (default: "default_user")
- llm - LLM provider: "cerebras", "openai", "anthropic", "google", "perplexity" (default: "cerebras")
- llm_api_key - Optional organization-owned API key for the LLM provider
- model - Model name (default: "llama-3.3-70b")
- max_tokens - Maximum tokens in response (optional; uses provider default if not set)

Returns: ChatResponse with AI message and metadata
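Whichever style you use, the SDK normalizes the arguments into a single JSON payload before POSTing to /chat, dropping None-valued optionals (see the `send` source listing below, which calls `model_dump(exclude_none=True)`). A standalone sketch of that normalization, using plain dicts in place of the Pydantic model:

```python
# Sketch of how legacy kwargs become the /chat payload, mirroring
# ChatAPI.send (plain dicts stand in for ChatRequest here).
from typing import Any, Dict, Optional

def legacy_to_payload(
    message: str,
    user_id: str = "default_user",
    llm: str = "cerebras",
    model: str = "llama-3.3-70b",
    llm_api_key: Optional[str] = None,
    max_tokens: Optional[int] = None,
) -> Dict[str, Any]:
    fields = {
        "messages": [message],   # a single string becomes a one-element list
        "user_id": user_id,
        "llm": llm,
        "llm_model": model,      # legacy 'model' maps to the 'llm_model' field
        "llm_api_key": llm_api_key,
        "max_tokens": max_tokens,
    }
    # Equivalent of model_dump(exclude_none=True): unset optionals are dropped
    return {k: v for k, v in fields.items() if v is not None}

payload = legacy_to_payload("Hello!", user_id="alice")
```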


Avatar API

Access via client.avatars.

list()

List all available avatars.

Returns: AvatarListResponse

Example:

avatars = client.avatars.list()
print(f"Found {avatars.count} avatars")

get(avatar_id: str)

Get details about a specific avatar.

Parameters:

- avatar_id (str): Avatar identifier

Returns: Dict[str, Any]

Example:

avatar = client.avatars.get("avatar_123")

create(avatar_data: Union[AvatarImportRequest, Dict[str, Any]])

Create a new avatar.

Parameters:

- avatar_data: AvatarImportRequest model or dict with avatar configuration

Returns: Dict[str, Any]

Example:

# avatar_id is NOT provided - it's generated and returned
avatar = client.avatars.create({
    "name": "TechExpert",
    "personality": {"traits": ["helpful"]}
})
# avatar["avatar_id"] contains the generated UUID
print(f"Created avatar with ID: {avatar['avatar_id']}")

update(avatar_id: str, avatar_data: Dict[str, Any])

Update an existing avatar.

Parameters:

- avatar_id (str): Avatar identifier
- avatar_data (Dict[str, Any]): Updated configuration

Returns: Dict[str, Any]

Example:

client.avatars.update("avatar_123", {
    "personality": "more cheerful"
})

delete(avatar_id: str)

Delete an avatar.

Parameters:

- avatar_id (str): Avatar identifier

Returns: Dict[str, Any]

Example:

client.avatars.delete("avatar_123")

chat(avatar_id: str, request: Optional[Union[AvatarChatRequest, Dict[str, Any]]] = None, **kwargs)

Chat with a specific avatar.

Parameters:

Style 1: Pydantic Model (Recommended)

from cosmicmind.models import AvatarChatRequest

request = AvatarChatRequest(
    avatar_id="tech_expert",          # Required: Avatar identifier
    messages=["Hello!"],              # Required: List of messages
    user_id="alice",                  # Default: "default_user"
    llm="cerebras",                   # Default: "cerebras"
    avatar_context_only=False,        # Default: False
    preserve_avatar_voice=True        # Default: True
)
response = client.avatars.chat("tech_expert", request)

Style 2: Dictionary

response = client.avatars.chat("tech_expert", {
    "messages": ["Hello!"],
    "user_id": "alice"
})

Style 3: Legacy Parameters (Backward Compatible)

response = client.avatars.chat(
    "tech_expert",                    # Required: Avatar identifier
    message="Hello!",                  # Message string
    user_id="alice"                   # Default: "default_user"
)

Parameter Details:

- avatar_id - Avatar identifier (required, first positional argument)
- request - AvatarChatRequest model or dict (optional, new style)
- message - Single message string (legacy style, use with other kwargs)
- user_id - User identifier for context tracking (default: "default_user")
- Additional parameters: same as ChatRequest (llm, llm_model, max_tokens, etc.)

Returns: ChatResponse with AI message and metadata
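A subtlety visible in the `chat` source listing further down this page: for dict-style requests the positional avatar_id is merged in first, so a dict that already contains an avatar_id keeps its own value in the payload, whereas a passed AvatarChatRequest has its avatar_id field overwritten by the positional argument. The request URL always uses the positional avatar_id. A sketch of the dict branch:

```python
# Sketch of how AvatarAPI.chat resolves avatar_id for dict-style requests
# (mirrors the dict branch in the source listing on this page).
from typing import Any, Dict

def merge_avatar_id(avatar_id: str, request: Dict[str, Any]) -> Dict[str, Any]:
    # The positional avatar_id is placed first, so a value already present
    # in the dict takes precedence; otherwise the positional one fills it in.
    return {"avatar_id": avatar_id, **request}

merge_avatar_id("tech_expert", {"messages": ["Hi"]})
```

To avoid surprises, either omit avatar_id from the dict or make sure it matches the positional argument.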


Models

ChatRequest

Request model for chat interactions.

from cosmicmind.models import ChatRequest

request = ChatRequest(
    messages: List[str],           # Required
    user_id: str = "default_user",
    llm: str = "cerebras",
    llm_model: str = "llama-3.3-70b",
    llm_api_key: Optional[str] = None,
    db_id: str = "default",
    timestamp: str = ...,           # Auto-generated
    overwrite: bool = False,
    max_tokens: Optional[int] = None
)

ChatResponse

Response model for chat interactions.

from cosmicmind.models import ChatResponse

response: ChatResponse = client.chat.send(request)
# response.message: str
# response.success: bool
# response.request_id: str
# response.token_usage: Optional[Dict[str, Any]]
# response.model_used: Optional[str]
# response.provider: Optional[str]

AvatarImportRequest

Request model for importing/creating avatars.

from cosmicmind.models import AvatarImportRequest

request = AvatarImportRequest(
    name: str,                     # Required
    version: str = "1.0",          # Avatar version (stored in DynamoDB), NOT the API version
    personality: Dict[str, Any],   # Required
    beliefs: List[Dict[str, str]] = [],
    communication_patterns: List[Dict[str, str]] = [],
    knowledge_domains: List[str] = []
)

Note: The version field in AvatarImportRequest is the avatar's version (e.g., "1.0", "2.0") stored in DynamoDB, not the API version. API versioning is handled via the X-API-Version header set when initializing the client.

AvatarChatRequest

Extended chat request for avatar interactions.

from cosmicmind.models import AvatarChatRequest

request = AvatarChatRequest(
    avatar_id: str,                # Required
    messages: List[str],           # Required
    user_id: str = "default_user",
    avatar_context_only: bool = False,
    preserve_avatar_voice: bool = True,
    # ... all ChatRequest fields ...
)

AvatarListResponse

Response model for listing avatars.

from cosmicmind.models import AvatarListResponse

avatars: AvatarListResponse = client.avatars.list()
# avatars.avatars: List[Dict[str, Any]]
# avatars.count: int

Exceptions

  • CosmicMindError: Base exception for all errors
  • AuthenticationError: Invalid or missing API key
  • RateLimitError: Rate limit exceeded
  • ServiceNotAvailableError: Service not included in your license
  • ValidationError: Invalid request data
  • ServerError: Server-side error (500+)
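These exceptions map to HTTP status codes in `_handle_response` (shown in the source listing below): 401 raises AuthenticationError, 403 ServiceNotAvailableError, 422 ValidationError, 429 RateLimitError, 500+ ServerError, and any other non-success status the base CosmicMindError. A standalone sketch of that dispatch, with local stand-ins for the exception classes exported by cosmicmind:

```python
# Sketch of the status-code-to-exception dispatch used by the client
# (_handle_response in the source listing below). The exception classes
# here are local stand-ins for the ones exported by cosmicmind.
class CosmicMindError(Exception): pass
class AuthenticationError(CosmicMindError): pass
class ServiceNotAvailableError(CosmicMindError): pass
class ValidationError(CosmicMindError): pass
class RateLimitError(CosmicMindError): pass
class ServerError(CosmicMindError): pass

def exception_for_status(status_code: int) -> type:
    if status_code == 401:
        return AuthenticationError
    if status_code == 403:
        return ServiceNotAvailableError
    if status_code == 422:
        return ValidationError
    if status_code == 429:
        return RateLimitError
    if status_code >= 500:
        return ServerError
    return CosmicMindError
```

In application code, catching CosmicMindError covers all of the above, since every other exception subclasses it.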

Code Documentation

For detailed code documentation, see the auto-generated API docs below:

CosmicMindClient

CosmicMind API Client

Main client for interacting with CosmicMind services.

Parameters:

- api_key (str, required): Your CosmicMind API key
- base_url (str, default 'http://localhost:8000'): API base URL; the default targets local dev. Production: https://cosmicmind.pansynapse.com/api (must include /api)
- api_version (str, default 'v1'): API version to use
Example

from cosmicmind import CosmicMindClient

# Production
client = CosmicMindClient(
    api_key="sk-your-key",
    base_url="https://cosmicmind.pansynapse.com/api"
)

# Local development
client = CosmicMindClient(api_key="sk-your-key")
response = client.chat.send(message="Hello!")

Source code in python/cosmicmind/client.py
class CosmicMindClient:
    """
    CosmicMind API Client

    Main client for interacting with CosmicMind services.

    Args:
        api_key: Your CosmicMind API key
        base_url: API base URL (default: http://localhost:8000 for local dev)
                  Production: https://cosmicmind.pansynapse.com/api (must include /api)
        api_version: API version to use (default: v1)

    Example:
        >>> from cosmicmind import CosmicMindClient
        >>> # Production
        >>> client = CosmicMindClient(
        ...     api_key="sk-your-key",
        ...     base_url="https://cosmicmind.pansynapse.com/api"
        ... )
        >>> # Local development
        >>> client = CosmicMindClient(api_key="sk-your-key")
        >>> response = client.chat.send(message="Hello!")
    """

    def __init__(
        self,
        api_key: str,
        base_url: str = "http://localhost:8000",
        api_version: str = "v1",
    ):
        if not api_key:
            raise AuthenticationError("API key is required")

        self.api_key = api_key
        self.base_url = base_url.rstrip("/")
        self.api_version = api_version

        # Initialize sub-APIs
        self.chat = ChatAPI(self)
        self.avatars = AvatarAPI(self)

    def _get_headers(self) -> Dict[str, str]:
        """Get HTTP headers for API requests"""
        return {
            "x-api-key": self.api_key,
            "Content-Type": "application/json",
            "X-API-Version": self.api_version,
            "User-Agent": f"CosmicMind-Python-SDK/3.0.0",
        }

    def _handle_response(self, response: requests.Response) -> Dict[str, Any]:
        """
        Handle API response and raise appropriate exceptions

        Args:
            response: requests Response object

        Returns:
            Parsed JSON response

        Raises:
            Various CosmicMindError subclasses based on status code
        """
        # Success responses
        if response.status_code in (200, 201):
            return response.json()
        if response.status_code == 204:
            # No content (e.g., DELETE success)
            return {}

        # Error handling
        try:
            error_data = response.json()
            error_message = error_data.get("detail", "Unknown error")
        except:
            error_message = response.text

        if response.status_code == 401:
            raise AuthenticationError(f"Authentication failed: {error_message}")
        elif response.status_code == 403:
            raise ServiceNotAvailableError(f"Service not available: {error_message}")
        elif response.status_code == 422:
            raise ValidationError(f"Invalid request: {error_message}")
        elif response.status_code == 429:
            raise RateLimitError(f"Rate limit exceeded: {error_message}")
        elif response.status_code >= 500:
            raise ServerError(f"Server error: {error_message}")
        else:
            raise CosmicMindError(f"Request failed: {error_message}")

    def _get(self, endpoint: str) -> Dict[str, Any]:
        """Make GET request to API"""
        url = f"{self.base_url}{endpoint}"
        response = requests.get(url, headers=self._get_headers())
        return self._handle_response(response)

    def _post(self, endpoint: str, data: Dict[str, Any]) -> Dict[str, Any]:
        """Make POST request to API"""
        url = f"{self.base_url}{endpoint}"
        response = requests.post(url, json=data, headers=self._get_headers())
        return self._handle_response(response)

    def _put(self, endpoint: str, data: Dict[str, Any]) -> Dict[str, Any]:
        """Make PUT request to API"""
        url = f"{self.base_url}{endpoint}"
        response = requests.put(url, json=data, headers=self._get_headers())
        return self._handle_response(response)

    def _delete(self, endpoint: str) -> Dict[str, Any]:
        """Make DELETE request to API"""
        url = f"{self.base_url}{endpoint}"
        response = requests.delete(url, headers=self._get_headers())
        return self._handle_response(response)

    def health(self) -> Dict[str, Any]:
        """
        Check API health status

        Returns:
            Health status dictionary

        Example:
            >>> health = client.health()
            >>> print(health['status'])
            'healthy'
        """
        return self._get("/health")

Functions

health

health() -> Dict[str, Any]

Check API health status

Returns:

- Dict[str, Any]: Health status dictionary

Example

health = client.health()
print(health['status'])  # e.g. 'healthy'

Source code in python/cosmicmind/client.py
def health(self) -> Dict[str, Any]:
    """
    Check API health status

    Returns:
        Health status dictionary

    Example:
        >>> health = client.health()
        >>> print(health['status'])
        'healthy'
    """
    return self._get("/health")

ChatAPI

Chat API client

Handles all chat-related operations.

Source code in python/cosmicmind/client.py
class ChatAPI:
    """
    Chat API client

    Handles all chat-related operations.
    """

    def __init__(self, client: "CosmicMindClient"):
        self.client = client

    def send(
        self,
        request: Optional[Union[ChatRequest, Dict[str, Any]]] = None,
        # Legacy parameters for backward compatibility
        message: Optional[str] = None,
        user_id: str = "default_user",
        llm: str = "cerebras",
        llm_api_key: Optional[str] = None,
        model: str = "llama-3.3-70b",
        max_tokens: Optional[int] = None,
        **kwargs,
    ) -> ChatResponse:
        """
        Send a chat message and get AI response

        Supports multiple input styles for backward compatibility:
        1. Pydantic model (recommended)
        2. Dictionary
        3. Legacy parameters

        Args:
            request: ChatRequest model or dict (new style)
            message: Single message string (legacy style)
            user_id: User identifier for context tracking
            llm: LLM provider (cerebras, openai, anthropic, google)
            llm_api_key: Organization owned API key for the LLM provider
            model: Model name
            max_tokens: Maximum tokens in response
            **kwargs: Additional parameters

        Returns:
            ChatResponse with AI message and metadata

        Examples:
            # Style 1: Pydantic model (recommended)
            >>> from cosmicmind.models import ChatRequest
            >>> request = ChatRequest(
            ...     messages=["Hello!"],
            ...     user_id="alice",
            ...     llm="cerebras"
            ... )
            >>> response = client.chat.send(request)

            # Style 2: Dict
            >>> response = client.chat.send({
            ...     "messages": ["Hello!"],
            ...     "user_id": "alice"
            ... })

            # Style 3: Legacy parameters (backward compatible)
            >>> response = client.chat.send(
            ...     message="Hello!",
            ...     user_id="alice"
            ... )
        """
        # Handle different input styles
        if request is not None:
            if isinstance(request, dict):
                # Convert dict to Pydantic model (validates!)
                chat_request = ChatRequest(**request)
            elif isinstance(request, ChatRequest):
                chat_request = request
            else:
                raise TypeError("request must be ChatRequest or dict")
        else:
            # Legacy style - build from parameters
            if message is None:
                raise ValueError("Either 'request' or 'message' must be provided")

            chat_request = ChatRequest(
                messages=[message],
                user_id=user_id,
                llm=llm,
                llm_api_key=llm_api_key,
                llm_model=model,
                max_tokens=max_tokens,
                **kwargs,
            )

        # Convert to dict for API
        payload = chat_request.model_dump(exclude_none=True)

        # Send to API
        response_dict = self.client._post("/chat", payload)

        # Return as Pydantic model
        return ChatResponse(**response_dict)

Functions

send

send(request: Optional[Union[ChatRequest, Dict[str, Any]]] = None, message: Optional[str] = None, user_id: str = 'default_user', llm: str = 'cerebras', llm_api_key: Optional[str] = None, model: str = 'llama-3.3-70b', max_tokens: Optional[int] = None, **kwargs) -> ChatResponse

Send a chat message and get AI response

Supports multiple input styles for backward compatibility:

1. Pydantic model (recommended)
2. Dictionary
3. Legacy parameters

Parameters:

- request (Optional[Union[ChatRequest, Dict[str, Any]]], default None): ChatRequest model or dict (new style)
- message (Optional[str], default None): Single message string (legacy style)
- user_id (str, default 'default_user'): User identifier for context tracking
- llm (str, default 'cerebras'): LLM provider (cerebras, openai, anthropic, google)
- llm_api_key (Optional[str], default None): Organization-owned API key for the LLM provider
- model (str, default 'llama-3.3-70b'): Model name
- max_tokens (Optional[int], default None): Maximum tokens in response
- **kwargs: Additional parameters

Returns:

- ChatResponse: AI message and metadata

Examples:

# Style 1: Pydantic model (recommended)
>>> from cosmicmind.models import ChatRequest
>>> request = ChatRequest(
...     messages=["Hello!"],
...     user_id="alice",
...     llm="cerebras"
... )
>>> response = client.chat.send(request)

# Style 2: Dict
>>> response = client.chat.send({
...     "messages": ["Hello!"],
...     "user_id": "alice"
... })

# Style 3: Legacy parameters (backward compatible)
>>> response = client.chat.send(
...     message="Hello!",
...     user_id="alice"
... )
Source code in python/cosmicmind/client.py
def send(
    self,
    request: Optional[Union[ChatRequest, Dict[str, Any]]] = None,
    # Legacy parameters for backward compatibility
    message: Optional[str] = None,
    user_id: str = "default_user",
    llm: str = "cerebras",
    llm_api_key: Optional[str] = None,
    model: str = "llama-3.3-70b",
    max_tokens: Optional[int] = None,
    **kwargs,
) -> ChatResponse:
    """
    Send a chat message and get AI response

    Supports multiple input styles for backward compatibility:
    1. Pydantic model (recommended)
    2. Dictionary
    3. Legacy parameters

    Args:
        request: ChatRequest model or dict (new style)
        message: Single message string (legacy style)
        user_id: User identifier for context tracking
        llm: LLM provider (cerebras, openai, anthropic, google)
        llm_api_key: Organization owned API key for the LLM provider
        model: Model name
        max_tokens: Maximum tokens in response
        **kwargs: Additional parameters

    Returns:
        ChatResponse with AI message and metadata

    Examples:
        # Style 1: Pydantic model (recommended)
        >>> from cosmicmind.models import ChatRequest
        >>> request = ChatRequest(
        ...     messages=["Hello!"],
        ...     user_id="alice",
        ...     llm="cerebras"
        ... )
        >>> response = client.chat.send(request)

        # Style 2: Dict
        >>> response = client.chat.send({
        ...     "messages": ["Hello!"],
        ...     "user_id": "alice"
        ... })

        # Style 3: Legacy parameters (backward compatible)
        >>> response = client.chat.send(
        ...     message="Hello!",
        ...     user_id="alice"
        ... )
    """
    # Handle different input styles
    if request is not None:
        if isinstance(request, dict):
            # Convert dict to Pydantic model (validates!)
            chat_request = ChatRequest(**request)
        elif isinstance(request, ChatRequest):
            chat_request = request
        else:
            raise TypeError("request must be ChatRequest or dict")
    else:
        # Legacy style - build from parameters
        if message is None:
            raise ValueError("Either 'request' or 'message' must be provided")

        chat_request = ChatRequest(
            messages=[message],
            user_id=user_id,
            llm=llm,
            llm_api_key=llm_api_key,
            llm_model=model,
            max_tokens=max_tokens,
            **kwargs,
        )

    # Convert to dict for API
    payload = chat_request.model_dump(exclude_none=True)

    # Send to API
    response_dict = self.client._post("/chat", payload)

    # Return as Pydantic model
    return ChatResponse(**response_dict)

AvatarAPI

Avatar API client

Handles all avatar-related operations.

Source code in python/cosmicmind/client.py
class AvatarAPI:
    """
    Avatar API client

    Handles all avatar-related operations.
    """

    def __init__(self, client: "CosmicMindClient"):
        self.client = client

    def list(self) -> AvatarListResponse:
        """
        List all available avatars

        Returns:
            AvatarListResponse with list of avatars and count

        Example:
            >>> avatars = client.avatars.list()
            >>> print(f"Found {avatars.count} avatars")
            >>> for avatar in avatars.avatars:
            ...     print(avatar['name'])
        """
        response = self.client._get("/avatars")
        return AvatarListResponse(**response)

    def get(self, avatar_id: str) -> Dict[str, Any]:
        """
        Get details about a specific avatar

        Args:
            avatar_id: Avatar identifier

        Returns:
            Avatar details dictionary

        Example:
            >>> avatar = client.avatars.get("avatar_123")
            >>> print(avatar['name'])
        """
        return self.client._get(f"/avatars/{avatar_id}")

    def create(
        self, avatar_data: Union[AvatarImportRequest, Dict[str, Any]]
    ) -> Dict[str, Any]:
        """
        Create a new avatar

        Args:
            avatar_data: AvatarImportRequest model or dict with avatar configuration

        Returns:
            Created avatar with ID

        Examples:
            # Style 1: Pydantic model (recommended)
            >>> from cosmicmind.models import AvatarImportRequest
            >>> request = AvatarImportRequest(
            ...     name="TechExpert",
            ...     personality={"traits": ["helpful"]}
            ... )
            >>> avatar = client.avatars.create(request)
            >>> print(avatar["avatar_id"])  # Server-generated UUID

            # Style 2: Dict (validates internally)
            >>> avatar = client.avatars.create({
            ...     "name": "TechExpert",
            ...     "personality": {"traits": ["helpful"]}
            ... })
            >>> print(avatar["avatar_id"])  # Server-generated UUID
        """
        if isinstance(avatar_data, dict):
            avatar_request = AvatarImportRequest(**avatar_data)
        elif isinstance(avatar_data, AvatarImportRequest):
            avatar_request = avatar_data
        else:
            raise TypeError("avatar_data must be AvatarImportRequest or dict")

        payload = avatar_request.model_dump(exclude_none=True)
        return self.client._post("/avatars/", payload)

    def update(self, avatar_id: str, avatar_data: Dict[str, Any]) -> Dict[str, Any]:
        """
        Update an existing avatar

        Args:
            avatar_id: Avatar identifier
            avatar_data: Updated configuration

        Returns:
            Success message

        Example:
            >>> client.avatars.update("avatar_123", {
            ...     "personality": "more cheerful"
            ... })
        """
        return self.client._put(f"/avatars/{avatar_id}", avatar_data)

    def delete(self, avatar_id: str) -> Dict[str, Any]:
        """
        Delete an avatar

        Args:
            avatar_id: Avatar identifier

        Returns:
            Success message

        Example:
            >>> client.avatars.delete("avatar_123")
        """
        return self.client._delete(f"/avatars/{avatar_id}")

    def chat(
        self,
        avatar_id: str,
        request: Optional[Union[AvatarChatRequest, Dict[str, Any]]] = None,
        # Legacy parameters for backward compatibility
        message: Optional[str] = None,
        user_id: str = "default_user",
        **kwargs,
    ) -> ChatResponse:
        """
        Chat with a specific avatar

        Supports multiple input styles for backward compatibility:
        1. Pydantic model (recommended)
        2. Dictionary
        3. Legacy parameters

        Args:
            avatar_id (str): Avatar identifier (required, first positional argument)
            request (Optional[Union[AvatarChatRequest, Dict[str, Any]]]): 
                AvatarChatRequest model or dict (new style, recommended)
            message (Optional[str]): Single message string (legacy style, use with other kwargs)
            user_id (str): User identifier for context tracking (default: "default_user")
            **kwargs: Additional parameters (llm, llm_model, max_tokens, etc.)

        Returns:
            ChatResponse: Response with avatar's reply and metadata

        Examples:
            # Style 1: Pydantic model (recommended)
            >>> from cosmicmind.models import AvatarChatRequest
            >>> request = AvatarChatRequest(
            ...     avatar_id="tech_expert",
            ...     messages=["How do I optimize queries?"],
            ...     user_id="alice"
            ... )
            >>> response = client.avatars.chat("tech_expert", request)

            # Style 2: Dict
            >>> response = client.avatars.chat("tech_expert", {
            ...     "messages": ["Help me with Python"],
            ...     "user_id": "alice"
            ... })

            # Style 3: Legacy parameters (backward compatible)
            >>> response = client.avatars.chat(
            ...     "tech_expert",
            ...     message="Hello!",
            ...     user_id="alice"
            ... )
        """
        if request is not None:
            if isinstance(request, dict):
                # Ensure avatar_id is set
                request_dict = {"avatar_id": avatar_id, **request}
                chat_request = AvatarChatRequest(**request_dict)
            elif isinstance(request, AvatarChatRequest):
                chat_request = request
                # Override avatar_id if provided in function call
                chat_request.avatar_id = avatar_id
            else:
                raise TypeError("request must be AvatarChatRequest or dict")
        else:
            # Legacy style - build from parameters
            if message is None:
                raise ValueError("Either 'request' or 'message' must be provided")

            chat_request = AvatarChatRequest(
                avatar_id=avatar_id,
                messages=[message],
                user_id=user_id,
                **kwargs,
            )

        payload = chat_request.model_dump(exclude_none=True)
        response_dict = self.client._post(f"/avatars/{avatar_id}/chat", payload)
        return ChatResponse(**response_dict)

Functions

chat

chat(avatar_id: str, request: Optional[Union[AvatarChatRequest, Dict[str, Any]]] = None, message: Optional[str] = None, user_id: str = 'default_user', **kwargs) -> ChatResponse

Chat with a specific avatar

Supports multiple input styles for backward compatibility:

1. Pydantic model (recommended)
2. Dictionary
3. Legacy parameters

Parameters:

- avatar_id (str, required): Avatar identifier (first positional argument)
- request (Optional[Union[AvatarChatRequest, Dict[str, Any]]], default None): AvatarChatRequest model or dict (new style, recommended)
- message (Optional[str], default None): Single message string (legacy style, use with other kwargs)
- user_id (str, default 'default_user'): User identifier for context tracking
- **kwargs: Additional parameters (llm, llm_model, max_tokens, etc.)

Returns:

- ChatResponse: Response with avatar's reply and metadata

Examples:

# Style 1: Pydantic model (recommended)
>>> from cosmicmind.models import AvatarChatRequest
>>> request = AvatarChatRequest(
...     avatar_id="tech_expert",
...     messages=["How do I optimize queries?"],
...     user_id="alice"
... )
>>> response = client.avatars.chat("tech_expert", request)

# Style 2: Dict
>>> response = client.avatars.chat("tech_expert", {
...     "messages": ["Help me with Python"],
...     "user_id": "alice"
... })

# Style 3: Legacy parameters (backward compatible)
>>> response = client.avatars.chat(
...     "tech_expert",
...     message="Hello!",
...     user_id="alice"
... )
Source code in python/cosmicmind/client.py
def chat(
    self,
    avatar_id: str,
    request: Optional[Union[AvatarChatRequest, Dict[str, Any]]] = None,
    # Legacy parameters for backward compatibility
    message: Optional[str] = None,
    user_id: str = "default_user",
    **kwargs,
) -> ChatResponse:
    """
    Chat with a specific avatar

    Supports multiple input styles for backward compatibility:
    1. Pydantic model (recommended)
    2. Dictionary
    3. Legacy parameters

    Args:
        avatar_id (str): Avatar identifier (required, first positional argument)
        request (Optional[Union[AvatarChatRequest, Dict[str, Any]]]): 
            AvatarChatRequest model or dict (new style, recommended)
        message (Optional[str]): Single message string (legacy style, use with other kwargs)
        user_id (str): User identifier for context tracking (default: "default_user")
        **kwargs: Additional parameters (llm, llm_model, max_tokens, etc.)

    Returns:
        ChatResponse: Response with avatar's reply and metadata

    Examples:
        # Style 1: Pydantic model (recommended)
        >>> from cosmicmind.models import AvatarChatRequest
        >>> request = AvatarChatRequest(
        ...     avatar_id="tech_expert",
        ...     messages=["How do I optimize queries?"],
        ...     user_id="alice"
        ... )
        >>> response = client.avatars.chat("tech_expert", request)

        # Style 2: Dict
        >>> response = client.avatars.chat("tech_expert", {
        ...     "messages": ["Help me with Python"],
        ...     "user_id": "alice"
        ... })

        # Style 3: Legacy parameters (backward compatible)
        >>> response = client.avatars.chat(
        ...     "tech_expert",
        ...     message="Hello!",
        ...     user_id="alice"
        ... )
    """
    if request is not None:
        if isinstance(request, dict):
            # Ensure avatar_id is set
            request_dict = {"avatar_id": avatar_id, **request}
            chat_request = AvatarChatRequest(**request_dict)
        elif isinstance(request, AvatarChatRequest):
            chat_request = request
            # Override avatar_id if provided in function call
            chat_request.avatar_id = avatar_id
        else:
            raise TypeError("request must be AvatarChatRequest or dict")
    else:
        # Legacy style - build from parameters
        if message is None:
            raise ValueError("Either 'request' or 'message' must be provided")

        chat_request = AvatarChatRequest(
            avatar_id=avatar_id,
            messages=[message],
            user_id=user_id,
            **kwargs,
        )

    payload = chat_request.model_dump(exclude_none=True)
    response_dict = self.client._post(f"/avatars/{avatar_id}/chat", payload)
    return ChatResponse(**response_dict)
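The three call styles above are reconciled by one normalization step before anything is sent. A minimal stand-alone sketch of that pattern, using plain dicts instead of the SDK's Pydantic models (the function name `normalize_chat_args` is illustrative, not part of the SDK):

```python
def normalize_chat_args(avatar_id, request=None, message=None, **kwargs):
    """Accept a dict, a pre-built request object, or legacy kwargs."""
    if isinstance(request, dict):
        # Style 2: dict -- inject the avatar_id from the positional argument
        return {"avatar_id": avatar_id, **request}
    if request is not None:
        # Style 1: model-like object -- read its fields, override avatar_id
        return {**vars(request), "avatar_id": avatar_id}
    if message is None:
        raise ValueError("Either 'request' or 'message' must be provided")
    # Style 3: legacy -- wrap the single message string in a list
    return {"avatar_id": avatar_id, "messages": [message], **kwargs}

payload = normalize_chat_args("tech_expert", message="Hello!", user_id="alice")
```

Whichever style the caller picks, the result is the same payload shape, which is why the legacy `message="..."` form and the `messages=[...]` form are interchangeable.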

create

create(avatar_data: Union[AvatarImportRequest, Dict[str, Any]]) -> Dict[str, Any]

Create a new avatar

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `avatar_data` | `Union[AvatarImportRequest, Dict[str, Any]]` | AvatarImportRequest model or dict with avatar configuration | *required* |

Returns:

| Type | Description |
| --- | --- |
| `Dict[str, Any]` | Created avatar with ID |

Examples:

# Style 1: Pydantic model (recommended)
>>> from cosmicmind.models import AvatarImportRequest
>>> request = AvatarImportRequest(
...     name="TechExpert",
...     personality={"traits": ["helpful"]}
... )
>>> avatar = client.avatars.create(request)
>>> print(avatar["avatar_id"])  # Server-generated UUID

# Style 2: Dict (validates internally)
>>> avatar = client.avatars.create({
...     "name": "TechExpert",
...     "personality": {"traits": ["helpful"]}
... })
>>> print(avatar["avatar_id"])  # Server-generated UUID
Source code in python/cosmicmind/client.py
def create(
    self, avatar_data: Union[AvatarImportRequest, Dict[str, Any]]
) -> Dict[str, Any]:
    """
    Create a new avatar

    Args:
        avatar_data: AvatarImportRequest model or dict with avatar configuration

    Returns:
        Created avatar with ID

    Examples:
        # Style 1: Pydantic model (recommended)
        >>> from cosmicmind.models import AvatarImportRequest
        >>> request = AvatarImportRequest(
        ...     name="TechExpert",
        ...     personality={"traits": ["helpful"]}
        ... )
        >>> avatar = client.avatars.create(request)
        >>> print(avatar["avatar_id"])  # Server-generated UUID

        # Style 2: Dict (validates internally)
        >>> avatar = client.avatars.create({
        ...     "name": "TechExpert",
        ...     "personality": {"traits": ["helpful"]}
        ... })
        >>> print(avatar["avatar_id"])  # Server-generated UUID
    """
    if isinstance(avatar_data, dict):
        avatar_request = AvatarImportRequest(**avatar_data)
    elif isinstance(avatar_data, AvatarImportRequest):
        avatar_request = avatar_data
    else:
        raise TypeError("avatar_data must be AvatarImportRequest or dict")

    payload = avatar_request.model_dump(exclude_none=True)
    return self.client._post("/avatars/", payload)

delete

delete(avatar_id: str) -> Dict[str, Any]

Delete an avatar

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `avatar_id` | `str` | Avatar identifier | *required* |

Returns:

| Type | Description |
| --- | --- |
| `Dict[str, Any]` | Success message |

Example

client.avatars.delete("avatar_123")

Source code in python/cosmicmind/client.py
def delete(self, avatar_id: str) -> Dict[str, Any]:
    """
    Delete an avatar

    Args:
        avatar_id: Avatar identifier

    Returns:
        Success message

    Example:
        >>> client.avatars.delete("avatar_123")
    """
    return self.client._delete(f"/avatars/{avatar_id}")

get

get(avatar_id: str) -> Dict[str, Any]

Get details about a specific avatar

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `avatar_id` | `str` | Avatar identifier | *required* |

Returns:

| Type | Description |
| --- | --- |
| `Dict[str, Any]` | Avatar details dictionary |

Example

>>> avatar = client.avatars.get("avatar_123")
>>> print(avatar['name'])

Source code in python/cosmicmind/client.py
def get(self, avatar_id: str) -> Dict[str, Any]:
    """
    Get details about a specific avatar

    Args:
        avatar_id: Avatar identifier

    Returns:
        Avatar details dictionary

    Example:
        >>> avatar = client.avatars.get("avatar_123")
        >>> print(avatar['name'])
    """
    return self.client._get(f"/avatars/{avatar_id}")

list

list() -> AvatarListResponse

List all available avatars

Returns:

| Type | Description |
| --- | --- |
| `AvatarListResponse` | AvatarListResponse with list of avatars and count |

Example

>>> avatars = client.avatars.list()
>>> print(f"Found {avatars.count} avatars")
>>> for avatar in avatars.avatars:
...     print(avatar['name'])

Source code in python/cosmicmind/client.py
def list(self) -> AvatarListResponse:
    """
    List all available avatars

    Returns:
        AvatarListResponse with list of avatars and count

    Example:
        >>> avatars = client.avatars.list()
        >>> print(f"Found {avatars.count} avatars")
        >>> for avatar in avatars.avatars:
        ...     print(avatar['name'])
    """
    response = self.client._get("/avatars")
    return AvatarListResponse(**response)

update

update(avatar_id: str, avatar_data: Dict[str, Any]) -> Dict[str, Any]

Update an existing avatar

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `avatar_id` | `str` | Avatar identifier | *required* |
| `avatar_data` | `Dict[str, Any]` | Updated configuration | *required* |

Returns:

| Type | Description |
| --- | --- |
| `Dict[str, Any]` | Success message |

Example

>>> client.avatars.update("avatar_123", {
...     "personality": "more cheerful"
... })

Source code in python/cosmicmind/client.py
def update(self, avatar_id: str, avatar_data: Dict[str, Any]) -> Dict[str, Any]:
    """
    Update an existing avatar

    Args:
        avatar_id: Avatar identifier
        avatar_data: Updated configuration

    Returns:
        Success message

    Example:
        >>> client.avatars.update("avatar_123", {
        ...     "personality": "more cheerful"
        ... })
    """
    return self.client._put(f"/avatars/{avatar_id}", avatar_data)

ChatRequest

Bases: BaseModel

Request model for chat interactions

Matches cosmicmind-server's CosmicMindRequest schema. Provides client-side validation before sending to API.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `messages` | `List[str]` | List of user messages to process |
| `user_id` | `str` | Unique identifier for context tracking |
| `llm` | `str` | LLM provider (cerebras, openai, anthropic, google, perplexity) |
| `llm_model` | `str` | Specific model name (e.g., llama-3.3-70b, gpt-4) |
| `llm_api_key` | `Optional[str]` | Optional organization-owned LLM API key |
| `db_id` | `str` | Database identifier for multi-tenancy |
| `timestamp` | `str` | ISO format timestamp (auto-generated if not provided) |
| `overwrite` | `bool` | Whether to overwrite existing context |
| `max_tokens` | `Optional[int]` | Maximum tokens for response (None = provider default) |

Example

>>> from cosmicmind.models import ChatRequest
>>> request = ChatRequest(
...     messages=["Hello, who am I?"],
...     user_id="alice_123",
...     llm="cerebras",
...     llm_model="llama-3.3-70b"
... )
>>> response = client.chat.send(request)

Source code in python/cosmicmind/models.py
class ChatRequest(BaseModel):
    """
    Request model for chat interactions

    Matches cosmicmind-server's CosmicMindRequest schema.
    Provides client-side validation before sending to API.

    Attributes:
        messages: List of user messages to process
        user_id: Unique identifier for context tracking
        llm: LLM provider (cerebras, openai, anthropic, google, perplexity)
        llm_model: Specific model name (e.g., llama-3.3-70b, gpt-4)
        llm_api_key: Optional organization-owned LLM API key
        db_id: Database identifier for multi-tenancy
        timestamp: ISO format timestamp (auto-generated if not provided)
        overwrite: Whether to overwrite existing context
        max_tokens: Maximum tokens for response (None = provider default)

    Example:
        >>> from cosmicmind.models import ChatRequest
        >>> request = ChatRequest(
        ...     messages=["Hello, who am I?"],
        ...     user_id="alice_123",
        ...     llm="cerebras",
        ...     llm_model="llama-3.3-70b"
        ... )
        >>> response = client.chat.send(request)
    """

    messages: List[str] = Field(..., description="User messages to process")
    user_id: str = Field(
        default="default_user", description="User identifier for context tracking"
    )
    llm: str = Field(
        default="cerebras",
        description="LLM provider (cerebras, openai, anthropic, google, perplexity)",
    )
    llm_model: str = Field(
        default="llama-3.3-70b", description="Specific model name"
    )
    llm_api_key: Optional[str] = Field(
        default=None, description="Organization-owned LLM API key (optional)"
    )
    db_id: str = Field(
        default="default", description="Database identifier for multi-tenancy"
    )
    timestamp: str = Field(
        default_factory=lambda: datetime.utcnow().isoformat() + "Z",
        description="Request timestamp in ISO format",
    )
    overwrite: bool = Field(
        default=False, description="Whether to overwrite existing context"
    )
    max_tokens: Optional[int] = Field(
        default=None, description="Maximum tokens for response (None = provider default)"
    )

    model_config = {
        "json_schema_extra": {
            "examples": [
                {
                    "messages": ["Hello, who am I?"],
                    "user_id": "alice_123",
                    "llm": "cerebras",
                    "llm_model": "llama-3.3-70b",
                    "db_id": "default",
                    "timestamp": "2025-01-01T12:00:00Z",
                }
            ]
        }
    }
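Because the optional fields default to `None` and the client serializes requests with `model_dump(exclude_none=True)`, unset options never reach the wire and the server applies its own defaults. A stdlib-only sketch of that behavior (`build_chat_payload` is a hypothetical helper, not part of the SDK):

```python
from datetime import datetime, timezone

def build_chat_payload(messages, user_id="default_user", llm="cerebras",
                       llm_model="llama-3.3-70b", llm_api_key=None,
                       db_id="default", timestamp=None, overwrite=False,
                       max_tokens=None):
    payload = {
        "messages": messages,
        "user_id": user_id,
        "llm": llm,
        "llm_model": llm_model,
        "llm_api_key": llm_api_key,
        "db_id": db_id,
        # Auto-generate an ISO timestamp when none is supplied
        "timestamp": timestamp or datetime.now(timezone.utc).isoformat(),
        "overwrite": overwrite,
        "max_tokens": max_tokens,
    }
    # Mirror model_dump(exclude_none=True): drop unset optional fields
    return {k: v for k, v in payload.items() if v is not None}

payload = build_chat_payload(["Hello!"], user_id="alice")
```

Here `llm_api_key` and `max_tokens` vanish from the payload when left as `None`, which is what lets the provider-side defaults apply.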

ChatResponse

Bases: BaseModel

Response model for chat interactions

Matches cosmicmind-server's CosmicMindResponse schema.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `message` | `str` | AI-generated response text |
| `success` | `bool` | Whether the request was processed successfully |
| `code` | `str` | Error code if success is False |
| `request_id` | `str` | Unique identifier for this request |
| `debug_context` | `Optional[Dict[str, Any]]` | Debug information (only in debug mode) |
| `token_usage` | `Optional[Dict[str, Any]]` | Token usage statistics with costs |
| `model_used` | `Optional[str]` | The specific model that generated the response |
| `provider` | `Optional[str]` | The LLM provider that was used |

Example

>>> response = client.chat.send(request)
>>> print(response.message)
>>> print(f"Tokens used: {response.token_usage['total_tokens']}")

Source code in python/cosmicmind/models.py
class ChatResponse(BaseModel):
    """
    Response model for chat interactions

    Matches cosmicmind-server's CosmicMindResponse schema.

    Attributes:
        message: AI-generated response text
        success: Whether the request was processed successfully
        code: Error code if success is False
        request_id: Unique identifier for this request
        debug_context: Debug information (only in debug mode)
        token_usage: Token usage statistics with costs
        model_used: The specific model that generated the response
        provider: The LLM provider that was used

    Example:
        >>> response = client.chat.send(request)
        >>> print(response.message)
        >>> print(f"Tokens used: {response.token_usage['total_tokens']}")
    """

    message: str = Field(..., description="AI-generated response text")
    success: bool = Field(..., description="Request success status")
    code: str = Field(default="", description="Error code if failed")
    request_id: str = Field(..., description="Unique request identifier")
    debug_context: Optional[Dict[str, Any]] = Field(
        default=None, description="Debug information (debug mode only)"
    )
    token_usage: Optional[Dict[str, Any]] = Field(
        default=None, description="Token usage statistics with costs"
    )
    model_used: Optional[str] = Field(
        default=None, description="Model that generated response"
    )
    provider: Optional[str] = Field(default=None, description="LLM provider used")
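Since `code` is only meaningful when `success` is False, a defensive consumer checks `success` before trusting `message`. A plain-dict sketch of that pattern (`extract_reply` is illustrative, not an SDK function; it operates on the raw response shape shown above):

```python
def extract_reply(response_dict):
    """Return the reply text, or raise if the request failed."""
    if not response_dict.get("success", False):
        # code carries the error identifier only on failure
        raise RuntimeError(
            f"request failed with code {response_dict.get('code')!r}"
        )
    return response_dict["message"]

reply = extract_reply({"message": "Hi Alice!", "success": True,
                       "request_id": "req_1"})
```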

AvatarImportRequest

Bases: BaseModel

Request model for importing/creating avatars

Matches cosmicmind-server's AvatarImportRequest schema.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `name` | `str` | Display name of the avatar (required) |
| `personality` | `Dict[str, Any]` | Core personality configuration dict (required) |
| `avatar_id` | `Optional[str]` | Optional - if not provided, server generates a UUID |
| `version` | `str` | Avatar version (for future updates) |
| `beliefs` | `List[Dict[str, str]]` | List of avatar beliefs |
| `communication_patterns` | `List[Dict[str, str]]` | Communication style patterns |
| `knowledge_domains` | `List[str]` | Areas of expertise |

Example

>>> from cosmicmind.models import AvatarImportRequest
>>> request = AvatarImportRequest(
...     name="TechExpert",
...     personality={
...         "traits": ["helpful", "patient", "knowledgeable"],
...         "speaking_style": "Clear and concise"
...     },
...     knowledge_domains=["python", "apis", "databases"]
... )
>>> avatar = client.avatars.create(request)
>>> print(avatar["avatar_id"])  # Server-generated UUID

Source code in python/cosmicmind/models.py
class AvatarImportRequest(BaseModel):
    """
    Request model for importing/creating avatars

    Matches cosmicmind-server's AvatarImportRequest schema.

    Attributes:
        name: Display name of the avatar (required)
        personality: Core personality configuration dict (required)
        avatar_id: Optional - if not provided, server generates a UUID
        version: Avatar version (for future updates)
        beliefs: List of avatar beliefs
        communication_patterns: Communication style patterns
        knowledge_domains: Areas of expertise

    Example:
        >>> from cosmicmind.models import AvatarImportRequest
        >>> request = AvatarImportRequest(
        ...     name="TechExpert",
        ...     personality={
        ...         "traits": ["helpful", "patient", "knowledgeable"],
        ...         "speaking_style": "Clear and concise"
        ...     },
        ...     knowledge_domains=["python", "apis", "databases"]
        ... )
        >>> avatar = client.avatars.create(request)
        >>> print(avatar["avatar_id"])  # Server-generated UUID
    """

    avatar_id: Optional[str] = Field(default=None, description="Optional - server generates UUID if not provided")
    name: str = Field(..., description="Avatar display name")
    version: str = Field(default="1.0", description="Avatar version")
    personality: Dict[str, Any] = Field(..., description="Personality configuration")
    beliefs: List[Dict[str, str]] = Field(
        default_factory=list, description="Avatar beliefs"
    )
    communication_patterns: List[Dict[str, str]] = Field(
        default_factory=list, description="Communication style patterns"
    )
    knowledge_domains: List[str] = Field(
        default_factory=list, description="Areas of expertise"
    )

    model_config = {
        "json_schema_extra": {
            "examples": [
                {
                    "name": "TechExpert",
                    "version": "1.0",
                    "personality": {
                        "traits": ["helpful", "patient", "knowledgeable"],
                        "speaking_style": "Clear and concise",
                        "background": "A technical expert who loves teaching",
                    },
                    "beliefs": [
                        {"belief": "Documentation is crucial for good code"},
                        {"belief": "Testing makes software reliable"},
                    ],
                    "communication_patterns": [
                        {
                            "pattern_type": "greeting",
                            "pattern": "Hello! How can I help with your technical questions?",
                        }
                    ],
                    "knowledge_domains": ["python", "apis", "databases", "cloud"],
                }
            ]
        }
    }
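Only `name` and `personality` are required; every other field is defaulted client-side before the request is sent. A rough stdlib approximation of that validation (`validate_avatar` is a hypothetical stand-in for the Pydantic model's behavior):

```python
def validate_avatar(data):
    """Check required fields and fill the documented defaults."""
    missing = [f for f in ("name", "personality") if f not in data]
    if missing:
        raise ValueError(f"missing required fields: {missing}")
    defaults = {
        "avatar_id": None,  # server generates a UUID if left unset
        "version": "1.0",
        "beliefs": [],
        "communication_patterns": [],
        "knowledge_domains": [],
    }
    # Caller-supplied values win over defaults
    return {**defaults, **data}

avatar = validate_avatar({"name": "TechExpert",
                          "personality": {"traits": ["helpful"]}})
```

This is why a minimal two-key dict is enough in the `create()` examples: the remaining fields arrive at the server with these defaults.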

AvatarChatRequest

Bases: ChatRequest

Extended chat request for avatar interactions

Inherits from ChatRequest and adds avatar-specific fields. Matches cosmicmind-server's AvatarChatRequest schema.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `avatar_id` | `str` | Avatar to chat with |
| `avatar_context_only` | `bool` | Use only avatar context, no user history |
| `preserve_avatar_voice` | `bool` | Maintain avatar's speaking style |

Example

>>> from cosmicmind.models import AvatarChatRequest
>>> request = AvatarChatRequest(
...     avatar_id="tech_expert",
...     messages=["How do I optimize database queries?"],
...     user_id="alice_123",
...     preserve_avatar_voice=True
... )
>>> response = client.avatars.chat("tech_expert", request)

Source code in python/cosmicmind/models.py
class AvatarChatRequest(ChatRequest):
    """
    Extended chat request for avatar interactions

    Inherits from ChatRequest and adds avatar-specific fields.
    Matches cosmicmind-server's AvatarChatRequest schema.

    Attributes:
        avatar_id: Avatar to chat with
        avatar_context_only: Use only avatar context, no user history
        preserve_avatar_voice: Maintain avatar's speaking style

    Example:
        >>> from cosmicmind.models import AvatarChatRequest
        >>> request = AvatarChatRequest(
        ...     avatar_id="tech_expert",
        ...     messages=["How do I optimize database queries?"],
        ...     user_id="alice_123",
        ...     preserve_avatar_voice=True
        ... )
        >>> response = client.avatars.chat("tech_expert", request)
    """

    avatar_id: str = Field(..., description="Avatar to chat with")
    avatar_context_only: bool = Field(
        default=False, description="Use only avatar context, no user history"
    )
    preserve_avatar_voice: bool = Field(
        default=True, description="Maintain avatar's speaking style"
    )

AvatarListResponse

Bases: BaseModel

Response model for listing avatars

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `avatars` | `List[Dict[str, Any]]` | List of available avatars with their details |
| `count` | `int` | Total number of avatars |

Example

>>> avatars = client.avatars.list()
>>> print(f"Found {avatars.count} avatars")
>>> for avatar in avatars.avatars:
...     print(avatar['name'])

Source code in python/cosmicmind/models.py
class AvatarListResponse(BaseModel):
    """
    Response model for listing avatars

    Attributes:
        avatars: List of available avatars with their details
        count: Total number of avatars

    Example:
        >>> avatars = client.avatars.list()
        >>> print(f"Found {avatars.count} avatars")
        >>> for avatar in avatars.avatars:
        ...     print(avatar['name'])
    """

    avatars: List[Dict[str, Any]] = Field(..., description="List of available avatars")
    count: int = Field(..., description="Total number of avatars")

exceptions

CosmicMind SDK Exceptions

Custom exceptions for the CosmicMind Python SDK.

Classes

AuthenticationError

Bases: CosmicMindError

Raised when API key is invalid or missing

Source code in python/cosmicmind/exceptions.py
class AuthenticationError(CosmicMindError):
    """Raised when API key is invalid or missing"""
    pass

CosmicMindError

Bases: Exception

Base exception for all CosmicMind errors

Source code in python/cosmicmind/exceptions.py
class CosmicMindError(Exception):
    """Base exception for all CosmicMind errors"""
    pass

RateLimitError

Bases: CosmicMindError

Raised when rate limit is exceeded

Source code in python/cosmicmind/exceptions.py
class RateLimitError(CosmicMindError):
    """Raised when rate limit is exceeded"""
    pass

ServerError

Bases: CosmicMindError

Raised when server encounters an error

Source code in python/cosmicmind/exceptions.py
class ServerError(CosmicMindError):
    """Raised when server encounters an error"""
    pass

ServiceNotAvailableError

Bases: CosmicMindError

Raised when trying to access a service not included in your license

Source code in python/cosmicmind/exceptions.py
class ServiceNotAvailableError(CosmicMindError):
    """Raised when trying to access a service not included in your license"""
    pass

ValidationError

Bases: CosmicMindError

Raised when request data is invalid

Source code in python/cosmicmind/exceptions.py
class ValidationError(CosmicMindError):
    """Raised when request data is invalid"""
    pass
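Because every error above subclasses `CosmicMindError`, callers can catch specific failures first and fall back to the base class for everything else. A self-contained sketch of that pattern (the classes are re-declared here, exactly as in the source above, so the snippet runs without the SDK installed; `handle_call` is a hypothetical helper):

```python
class CosmicMindError(Exception):
    """Base exception for all CosmicMind errors"""

class RateLimitError(CosmicMindError):
    """Raised when rate limit is exceeded"""

class ServerError(CosmicMindError):
    """Raised when server encounters an error"""

def handle_call(fn):
    """Run fn, translating SDK errors into a status string."""
    try:
        return fn()
    except RateLimitError:
        return "rate-limited: back off and retry"
    except CosmicMindError as exc:
        # Any other SDK error, e.g. ServerError or AuthenticationError
        return f"failed: {type(exc).__name__}"

def boom():
    raise ServerError("upstream failure")

status = handle_call(boom)
```

Ordering matters: the `RateLimitError` clause must precede the `CosmicMindError` clause, since the base class would otherwise swallow it.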