CUIDADO Quickstart

Setup Instructions

  1. Install Ollama for your platform
  2. Pull the model: ollama pull gemma3:4b-instruct-q4
  3. Clone the repository: git clone https://github.com/pagihall/cuidado
  4. Install dependencies: pnpm install
  5. Start the development server: pnpm dev

Usage

Once running, visit http://localhost:3000/cuidado/chat to use your local assistant.

The assistant uses your local Ollama model and maintains conversation context through behavioral memory.
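As a rough sketch of how a chat turn reaches the local model: Ollama exposes a /api/chat endpoint on port 11434 that accepts the model name and the message history. The endpoint and payload shape below follow Ollama's public API; the helper function and model name are illustrative, not CUIDADO's actual code.

```typescript
// Illustrative sketch (not CUIDADO's implementation) of building a
// request against Ollama's /api/chat endpoint.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

function buildChatRequest(model: string, messages: ChatMessage[]) {
  return {
    url: "http://localhost:11434/api/chat", // Ollama's default local address
    body: JSON.stringify({ model, messages, stream: true }),
  };
}

const req = buildChatRequest("gemma3:4b-instruct-q4", [
  { role: "user", content: "Hello" },
]);
console.log(req.url);
```

Passing the full `messages` array on every turn is what lets a stateless HTTP API behave like an ongoing conversation.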

Configuration

CUIDADO can be configured through environment variables and YAML policy files:

  • MODEL_PRIMARY - Primary Ollama model name
  • EMBED_MODEL - Embedding model for retrieval
  • TEMP - Sampling temperature for model responses
  • src/policy/persona.yaml - AI personality and behavior
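A hypothetical .env fragment tying the variables above together (the variable names come from the list; the values are placeholders, not project defaults):

```shell
# .env — values are illustrative placeholders
MODEL_PRIMARY=gemma3:4b-instruct-q4   # primary Ollama model
EMBED_MODEL=nomic-embed-text          # embedding model for retrieval
TEMP=0.7                              # sampling temperature
```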

Features

  • Behavioral Memory: Learns from interaction patterns and feedback
  • Local Retrieval: Uses embeddings to find relevant context
  • Safety Systems: Built-in content filtering and ethical guidelines
  • Streaming Responses: Real-time token streaming for responsive UX
  • Constitutional AI: Governed by editable policy files
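The streaming feature above can be sketched concretely: Ollama streams chat responses as newline-delimited JSON, where each line carries a `message.content` token chunk. The helper below is illustrative, not CUIDADO's actual implementation.

```typescript
// Illustrative sketch of reassembling Ollama's streaming NDJSON chunks.
// Each non-empty line is a JSON object containing a partial token in
// message.content; concatenating them yields the full response text.
function collectStreamedTokens(ndjson: string): string {
  return ndjson
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as { message: { content: string } })
    .map((chunk) => chunk.message.content)
    .join("");
}

// Example: two chunks as they might arrive over the wire.
const sample =
  '{"message":{"content":"Hel"}}\n' +
  '{"message":{"content":"lo!"}}\n';
console.log(collectStreamedTokens(sample)); // → "Hello!"
```

In the real UI the chunks would be appended to the page as they arrive rather than buffered, which is what makes the responses feel real-time.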