DeepSeek Coder comprises a series of code language models trained from scratch on a corpus of 87% code and 13% natural language in both English and Chinese, with each model pre-trained on 2T tokens. We provide various sizes of the code model, ranging from 1.3B to 33B parameters. Each model is pre-trained on a repo-level code corpus with a window size of 16K and an extra fill-in-the-blank task, resulting in foundational models (DeepSeek-Coder-Base). We further fine-tune the base models on 2B tokens of instruction data to obtain the instruction-tuned models, named DeepSeek-Coder-Instruct.
- Pretrained on 2 trillion tokens spanning more than 80 programming languages.
- Various model sizes (1.3B, 5.7B, 6.7B and 33B) to support different requirements.
- A 16K window size, supporting project-level code completion and infilling.
- State-of-the-Art performance among open code models.
- Open source and free for research and commercial use.
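As a quick illustration of how a base model can be used for left-to-right code completion, below is a minimal sketch built on the Hugging Face `transformers` API. The checkpoint name, dtype, and generation settings are assumptions; adjust them to the model size and hardware you actually use.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint name; swap in the model size you want to run.
model_id = "deepseek-ai/deepseek-coder-6.7b-base"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # assumed dtype; use float16/float32 as needed
    device_map="auto",            # requires `accelerate`; places layers automatically
    trust_remote_code=True,
)

# Plain left-to-right completion: the model continues the given code prefix.
prompt = "# write a quick sort algorithm in python\ndef quick_sort(arr):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For infilling, the same `generate` call is used, but the prompt wraps the code prefix and suffix in the fill-in-the-middle sentinel tokens defined by the tokenizer; check `tokenizer.additional_special_tokens` for the exact token strings of the checkpoint you load.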