diff --git a/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md b/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md
new file mode 100644
index 0000000..2b33d86
--- /dev/null
+++ b/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md
@@ -0,0 +1,2 @@
+DeepSeek open-sourced DeepSeek-R1, an LLM fine-tuned with reinforcement learning (RL) to improve its reasoning capability. DeepSeek-R1 achieves results on par with OpenAI's o1 model on several benchmarks, including MATH-500 and SWE-bench.
+DeepSeek-R1 is based on DeepSeek-V3, a mixture-of-experts (MoE) model recently open-sourced by DeepSeek. The base model is fine-tuned using Group Relative Policy Optimization (GRPO), a reasoning-oriented variant of RL. The research team also performed knowledge distillation from DeepSeek-R1 into open-source Qwen and Llama models and released several versions of each.
\ No newline at end of file