# CUIDADO Quickstart

## Setup Instructions
- Install Ollama for your platform
- Pull the model: `ollama pull gemma3:4b-instruct-q4`
- Clone the repository: `git clone https://github.com/pagihall/cuidado`
- Install dependencies: `pnpm install`
- Start the development server: `pnpm dev`
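For copy-paste convenience, the steps above can be run as one shell sequence (this assumes the Ollama CLI and pnpm are already on your PATH, and that the clone lands in a `cuidado` directory):

```shell
# Pull the local model used by CUIDADO
ollama pull gemma3:4b-instruct-q4

# Fetch the source and install dependencies
git clone https://github.com/pagihall/cuidado
cd cuidado
pnpm install

# Start the development server
pnpm dev
```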
## Usage
Once running, visit http://localhost:3000/cuidado/chat to use your local assistant.
The assistant will use your local Ollama model and maintain conversation context through behavioral memory.
## Configuration

CUIDADO can be configured through environment variables and YAML policy files:

- `MODEL_PRIMARY` - Primary Ollama model name
- `EMBED_MODEL` - Embedding model for retrieval
- `TEMP` - Temperature for model responses
- `src/policy/persona.yaml` - AI personality and behavior
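As a sketch, these environment variables could live in a `.env` file. The values below are illustrative placeholders, not the project's defaults — `nomic-embed-text` in particular is just one common Ollama embedding model, chosen here as an assumption:

```
MODEL_PRIMARY=gemma3:4b-instruct-q4
EMBED_MODEL=nomic-embed-text
TEMP=0.7
```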
## Features
- Behavioral Memory: Learns from interaction patterns and feedback
- Local Retrieval: Uses embeddings to find relevant context
- Safety Systems: Built-in content filtering and ethical guidelines
- Streaming Responses: Real-time token streaming for responsive UX
- Constitutional AI: Governed by editable policy files
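To illustrate the local-retrieval idea, here is a minimal cosine-similarity ranking sketch in TypeScript. This is not CUIDADO's actual implementation — the `cosine` and `topK` names and the document shape are assumptions for illustration; in practice the vectors would come from the configured embedding model:

```typescript
// Cosine similarity between two equal-length embedding vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank stored documents by similarity to the query vector and keep the top k.
function topK(
  query: number[],
  docs: { id: string; vec: number[] }[],
  k: number,
): { id: string; score: number }[] {
  return docs
    .map((d) => ({ id: d.id, score: cosine(query, d.vec) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```

Retrieved document IDs would then be used to pull the matching text into the model's context before generating a response.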