From 55dbc2a0457047344883a75d527e7dacba11f67d Mon Sep 17 00:00:00 2001
From: letaaddis87647
Date: Sat, 12 Apr 2025 19:48:17 +0000
Subject: [PATCH] Add 'DeepSeek Open-Sources DeepSeek-R1 LLM with Performance Comparable To OpenAI's O1 Model'

---
 ...R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md | 2 ++
 1 file changed, 2 insertions(+)
 create mode 100644 DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md

diff --git a/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md b/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md
new file mode 100644
index 0000000..b36ec26
--- /dev/null
+++ b/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md
@@ -0,0 +1,2 @@
+DeepSeek open-sourced DeepSeek-R1, an LLM fine-tuned with reinforcement learning (RL) to improve reasoning ability. DeepSeek-R1 achieves results on par with OpenAI's o1 model on multiple benchmarks, including MATH-500 and SWE-bench.
+
+DeepSeek-R1 is based on DeepSeek-V3, a mixture-of-experts (MoE) model recently open-sourced by DeepSeek. This base model is fine-tuned using Group Relative Policy Optimization (GRPO), a reasoning-oriented variant of RL. The research team also performed knowledge distillation from DeepSeek-R1 to open-source Qwen and Llama models and released several versions of each.
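+
+GRPO's defining feature, as described in the DeepSeek papers, is that it scores each sampled completion against the other completions drawn for the same prompt, rather than against a separate learned value (critic) model. The snippet below is a minimal illustrative sketch of that group-relative advantage step only; the function name and reward values are hypothetical, and it is not DeepSeek's implementation.
+
+```python
+# Illustrative sketch only -- not DeepSeek's implementation.
+# GRPO's core idea: for each prompt, sample a group of completions, then
+# normalize each completion's reward against the group's mean and standard
+# deviation instead of using a separate learned value (critic) model.
+from statistics import mean, pstdev
+
+def group_relative_advantages(rewards):
+    """Return one advantage per sampled completion in the group."""
+    mu = mean(rewards)
+    sigma = pstdev(rewards) or 1.0  # guard against a zero-variance group
+    return [(r - mu) / sigma for r in rewards]
+
+# Hypothetical rewards for four completions sampled for one prompt.
+print(group_relative_advantages([1.0, 0.0, 0.5, 1.0]))
+```
\ No newline at end of file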