DeepSeek: R1 Distill Qwen 1.5B

deepseek/deepseek-r1-distill-qwen-1.5b

Created Jan 31, 2025 · 131,072 context
$0.18/M input tokens · $0.18/M output tokens

DeepSeek R1 Distill Qwen 1.5B is a distilled language model based on Qwen 2.5 Math 1.5B, fine-tuned on outputs from DeepSeek R1. It is a very small, efficient model that outperforms GPT-4o-0513 on math benchmarks.

Other benchmark results include:

  • AIME 2024 pass@1: 28.9
  • AIME 2024 cons@64: 52.7
  • MATH-500 pass@1: 83.9

By distilling DeepSeek R1's outputs into a much smaller base model, it achieves performance comparable to far larger frontier models on these tasks.
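For reference, a minimal sketch of calling this model through OpenRouter's OpenAI-compatible chat completions endpoint is shown below; the API key environment variable, prompt, and max_tokens value are illustrative assumptions, not values taken from this page.

```python
# Minimal sketch: querying deepseek/deepseek-r1-distill-qwen-1.5b via OpenRouter's
# OpenAI-compatible chat completions endpoint. The API key env var, prompt, and
# max_tokens value are illustrative assumptions.
import os
import requests

response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
        "Content-Type": "application/json",
    },
    json={
        "model": "deepseek/deepseek-r1-distill-qwen-1.5b",
        "messages": [
            {"role": "user", "content": "What is the integral of x^2 from 0 to 3?"}
        ],
        "max_tokens": 512,
    },
    timeout=60,
)
response.raise_for_status()
# Print the model's reply from the first completion choice.
print(response.json()["choices"][0]["message"]["content"])
```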

Uptime stats for R1 Distill Qwen 1.5B across all providers