Brie

Model · November 2025

A family of domain-specific language models fine-tuned for continental philosophy, speculative reasoning, and creative writing. Trained using human-curated data authoring — a methodology where training data is authored through iterative discussions with LLMs rather than scraped or synthetically generated.

What Brie does

Brie excels at the kind of thinking that sits between rigorous analysis and creative exploration — continental philosophy, phenomenology, existentialism, critical theory, and literary analysis. It engages with ideas rather than just retrieving information.

The models are fine-tuned on 1,213 carefully curated examples developed over years of philosophical and creative discussions. This small but high-quality dataset produces models that achieve 77–91% win rates against their base models in blind evaluations.

Available models

Brie Qwen 2.5 3B

Highest specialization — 91.2% in-domain win rate

Best for deep philosophical engagement. Aggressively specialized while maintaining competence on general tasks.

HuggingFace →

Brie Llama 3.2 3B

Best general preservation — 80.4% in-domain win rate

Strong domain performance with the best preservation of general capabilities. Ideal when maintaining broad competence matters.

HuggingFace →

Brie Qwen 2.5 0.5B

Small model viability — 72% comprehensive win rate

Runs on consumer hardware (M4 MacBook). Demonstrates effective domain adaptation at minimal compute cost; see the loading sketch below.

HuggingFace →
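
For the 0.5B model, the sketch below shows one minimal way to load and run it locally with Hugging Face transformers. The repo id is the base Qwen checkpoint used as a stand-in, since the Brie repo ids are only given via the HuggingFace links above; swap in the actual Brie checkpoint.

    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Stand-in repo id: replace with the Brie 0.5B checkpoint from the link above.
    model_id = "Qwen/Qwen2.5-0.5B-Instruct"

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype="auto",  # half precision where supported keeps memory modest
        device_map="auto",   # needs accelerate; maps to MPS on Apple silicon
    )

    messages = [{"role": "user",
                 "content": "What does Merleau-Ponty mean by the 'lived body'?"}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    output = model.generate(inputs, max_new_tokens=400,
                            do_sample=True, temperature=0.7)
    print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))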

The research behind it

Brie was created using a novel methodology called human-curated data authoring. Instead of collecting massive datasets or using synthetic data generation, training examples were authored through iterative discussions with LLMs — positioning them as authoring tools rather than autonomous generators.
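
As a concrete illustration, the sketch below shows what a single authored example might look like in a standard chat-format JSONL file. The actual Brie dataset schema is not published in this overview, so the field names and metadata are assumptions.

    import json

    # A hypothetical curated record; "domain" and "revisions" are illustrative
    # metadata one might keep during iterative authoring, not Brie's schema.
    example = {
        "messages": [
            {"role": "user",
             "content": "How does Heidegger's thrownness differ from "
                        "Sartre's facticity?"},
            {"role": "assistant",
             "content": "Both name the unchosen situation we find ourselves "
                        "in, but they diverge on how far that situation can "
                        "be taken up and made one's own..."},
        ],
        "domain": "continental philosophy",
        "revisions": 3,  # authoring passes in discussion with an LLM
    }

    with open("brie_curated.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(example, ensure_ascii=False) + "\n")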

The key finding: quality and curation matter more than scale. A set of 1,213 carefully authored examples produced results that conventional fine-tuning typically needs 10,000+ examples to reach.
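
To make the scale claim concrete, the sketch below shows one common way such a small chat-format dataset could drive parameter-efficient fine-tuning, using LoRA via the peft and trl libraries. The actual Brie training recipe and hyperparameters are not stated here; everything in this sketch is an assumption, and trl's API details vary across versions.

    from datasets import load_dataset
    from peft import LoraConfig
    from trl import SFTConfig, SFTTrainer

    # Curated chat-format examples (see the record sketch above).
    dataset = load_dataset("json", data_files="brie_curated.jsonl", split="train")

    # LoRA trains a small adapter rather than all weights, a natural fit for
    # a 1,213-example dataset; rank and alpha here are assumed values.
    peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                             task_type="CAUSAL_LM")

    trainer = SFTTrainer(
        model="Qwen/Qwen2.5-3B-Instruct",  # base model as a stand-in
        train_dataset=dataset,
        peft_config=peft_config,
        args=SFTConfig(output_dir="brie-qwen2.5-3b", num_train_epochs=3),
    )
    trainer.train()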

Read the full research →

Links