Tanishq Singla

Jan 29, 2025 • 3 min read

How I use the DeepSeek R1 model as a coding assistant

Introduction

DeepSeek recently announced their new open-source model, DeepSeek R1. Here's a short explainer video by Fireship:

https://youtu.be/-2k1rcRzsLA?si=3QXq2V4v3JvVtzc4

Why this model?

Firstly, DeepSeek R1 is open source, which for me means I can run the model on my own machine and do all sorts of stuff with the LLM.

Second, the model is cheap in terms of compute: it can run on a wide range of machines, and its chain-of-thought reasoning gives surprisingly good results, though your mileage may vary. Moreover, you can fine-tune the model to suit your needs, provided you have enough compute and data to do so.

Setting up as a coding assistant

TL;DR

If you want to save some time, here's a quick rundown:

  1. Pull DeepSeek R1 from Ollama

  2. Serve DeepSeek R1 with Ollama

  3. Install the 'Continue' VSCode extension

  4. Let 'Continue' detect the model and start using it
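In terminal form, the first two steps look roughly like this. This is a sketch assuming the 7b tag (the one I use); substitute whichever variant fits your machine:

```shell
# Download the DeepSeek R1 weights (7b is the default tag on Ollama)
ollama pull deepseek-r1:7b

# Start the Ollama server so editor extensions can reach the model
# (the Ollama desktop app usually runs this in the background for you)
ollama serve
```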

If I saved you some time, I think that deserves some praise. After all, many people copy-paste their AI-ghostwritten blogs to inflate read time, so a like will be highly appreciated.

The longer version

For folks who aren't familiar with these tools, don't worry, I've got you covered.

Installing Ollama

We'll use Ollama to install and run the model, so go to Ollama's page and download it for your operating system.


Visit the DeepSeek R1 model page on the Ollama website; as of writing this blog, the link is https://ollama.com/library/deepseek-r1:7b

DeepSeek R1 comes in multiple variants, and the page above lists and categorizes them.

Pick the one that suits your machine best. I can run the 7b model, which also happens to be Ollama's default.

After picking the right model, copy the command shown on the Ollama page by clicking the copy button.

Run this command in your terminal (PowerShell or Command Prompt for Windows users).

This command will automatically install and run the model.
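For reference, at the time of writing, the copied command for the 7b variant looked like this (treat this as a sketch; always prefer whatever the Ollama page currently shows):

```shell
# Downloads the model on first use, then drops you into an interactive chat
ollama run deepseek-r1:7b
```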

Installing Continue Extension

In VSCode install the Continue extension.

After it's installed, you'll see the Continue icon in the left pane of VSCode.

Now that the extension is installed, click the icon and you'll see an option to add a chat model.

Choose "Ollama" in the "Provider" section and "Autodetect" as the model.

Continue will then open and edit its config file, which ends up looking something like this:
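For reference, the relevant model entry in Continue's config file looked roughly like the snippet below when I set this up. Treat the exact fields as an assumption on my part, since Continue's config format changes between versions:

```json
{
  "models": [
    {
      "title": "Ollama",
      "provider": "ollama",
      "model": "AUTODETECT"
    }
  ]
}
```

With "AUTODETECT", Continue queries the local Ollama server for whatever models you have pulled, so the DeepSeek model shows up without being hard-coded.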

Now you should see your chat model in Continue's chat box.

Setup Complete

That's it, the setup is complete. You can now use the newly installed DeepSeek model as your personal coding assistant.
The best part: it works offline too, since it runs on your machine.

Conclusion

DeepSeek's R1 is different. What I like about it is that you can literally watch the whole chain of thought live before it outputs the final result, and duly improve your prompt if you feel some assumption is wrong.

The model isn't perfect, though, and I have my fair share of pain points with it. Then again, I'm running a very small version: the full DeepSeek R1 model is about 400x larger than the one I am using.

That said, having the ability to run such a model locally is a breakthrough in itself and it's only going to get better.
