Putting the DeepSeek R1 Model to the Test with Coding Challenges
I’ve decided to test DeepSeek’s R1 model while brushing up on my algorithms and data structures. One thing I particularly enjoy about this model, compared to OpenAI’s o1, is the transparency in its “thinking” process. DeepSeek R1 displays its reasoning in real time, allowing users to follow the chain of thought and observe how it accounts for details, edge cases, and potential pitfalls along the way. I found it both amusing and endearing to read the multitude of “But wait…” and “Hmm…” interjections – it felt like watching a student tackle a competition problem under time pressure!
Snippet from the model’s thinking process:
Wait, in JavaScript, reading input can be a bit tricky. Since it’s for a coding problem, maybe the user expects to read from stdin, perhaps using readline or splitting the input into lines. But in practice, for coding challenges, the input might be read line by line.
Hmm. Let’s assume that the input is provided as a single string, split into lines. For example, in HackerRank, you can read input using ‘readline()’ in Node.js.
So, the plan is:
1. Read the first line to get m and n.
2. Read the magazine words from the second line.
3. Read the note words from the third line.
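For context, this is the standard Node.js stdin-reading pattern the model is circling around. The sketch below is my own illustration of that setup, not the model's output, and assumes a HackerRank-style harness where the whole input arrives on stdin:

// Illustrative harness only: read everything from stdin, then split into lines
// matching the three-line input format described above.
process.stdin.resume();
process.stdin.setEncoding('utf8');

let inputData = '';
process.stdin.on('data', (chunk) => {
  inputData += chunk;
});

process.stdin.on('end', () => {
  const lines = inputData.split('\n');
  // Line 1: the counts m and n; lines 2 and 3: the magazine and note words.
  const [m, n] = lines[0].trim().split(' ').map(Number);
  const magazine = lines[1].trim().split(' ');
  const note = lines[2].trim().split(' ');
  checkMagazine(magazine, note); // the solution function shown below
});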
The model is surprisingly capable when it comes to solving coding challenges. For example, for the Ransom Note problem on HackerRank (easy difficulty), the model came up with a solid solution:
function checkMagazine(magazine, note) {
  const magFrequency = createFrequencyMap(magazine);
  const noteFrequency = createFrequencyMap(note);
  for (const word in noteFrequency) {
    if (!(word in magFrequency) || magFrequency[word] < noteFrequency[word]) {
      console.log("No");
      return;
    }
  }
  console.log("Yes");
}

function createFrequencyMap(words) {
  const frequency = {};
  for (const word of words) {
    frequency[word] = (frequency[word] || 0) + 1;
  }
  return frequency;
}
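To see the expected behavior, here's a quick usage example. The inputs mirror the problem's sample cases as I remember them, so treat the exact words as illustrative:

checkMagazine(['give', 'me', 'one', 'grand', 'today', 'night'], ['give', 'one', 'grand', 'today']);
// prints "Yes" – every note word appears often enough in the magazine

checkMagazine(['two', 'times', 'three', 'is', 'not', 'four'], ['two', 'times', 'two']);
// prints "No" – the note needs 'two' twice, but the magazine has it only once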
I then asked DeepSeek R1 to compare its solution to mine:
function checkMagazine(magazine: string[], note: string[]): void {
  const wordsToUse = new Map()
  for (const word of note) {
    wordsToUse.set(word, (wordsToUse.get(word) ?? 0) + 1)
  }
  for (const word of magazine) {
    if (wordsToUse.has(word)) {
      wordsToUse.set(word, wordsToUse.get(word) - 1)
      if (wordsToUse.get(word) === 0) {
        wordsToUse.delete(word)
      }
    }
    if (wordsToUse.size === 0) {
      console.log('Yes')
      return
    }
  }
  console.log('No')
}
The model thought for 16 seconds and agreed that mine comes out ahead on space complexity:
Comparison

Time Complexity
- Your Solution: O(M + N), where M is the number of words in magazine, and N is the number of words in note.
  - You iterate through note once (O(N)) and magazine once (O(M)).
- Original Solution: Also O(M + N).
- Both solutions have the same time complexity.

Space Complexity
- Your Solution: O(N) (worst case, if all words in note are unique).
  - You only track the frequency of words in note using a Map.
- Original Solution: O(M + N) (worst case).
  - It builds two frequency maps: one for magazine (O(M)) and one for note (O(N)).

Your space complexity is better because you only track the required words from note, not the entire magazine.

Key Differences
- Early Exit:
  - Your solution checks wordsToUse.size === 0 during iteration over magazine. If the magazine contains all required words early, it exits immediately. This can save time in some cases.
  - The original solution only checks after building both frequency maps.
- Handling Unused Words:
  - Your solution ignores words in magazine that aren't needed for note.
  - The original solution stores all words in magazine, even those irrelevant to the note.
- Edge Cases:
  - Your solution correctly handles words in note that are absent in magazine (they remain in wordsToUse and cause a "No").
  - Both solutions are logically equivalent in terms of correctness.

Which is Better?
- Space: Your solution is better (O(N) vs. O(M + N)).
- Time: Both are equivalent (O(M + N)).
- Practical Efficiency: Your solution may exit early if the note is satisfied before iterating the entire magazine, which is an advantage.

Your solution is more space-efficient, while maintaining the same time complexity. It's an improvement over the original!
Flattery aside, I then challenged the model to devise a solution surpassing both my approach and its own earlier attempt in time and space complexity. After rigorously analyzing the theoretical limits of both metrics, it concluded:
Final Verdict
Your solution is already optimal for both time and space complexity. No algorithm can do better asymptotically, and your approach has practical advantages (early exit, minimal space).
I then asked the model for some help with the Array Manipulation problem (hard difficulty). I remember solving a similar problem a while ago, but I didn't quite know where to start. The model provided the following solution:
function arrayManipulation(n, queries) {
  // Initialize a difference array with n+2 elements to handle 1-based indexing and b+1
  const diff = new Array(n + 2).fill(0);

  for (const [a, b, k] of queries) {
    diff[a] += k;
    diff[b + 1] -= k;
  }

  let max = 0;
  let current = 0;
  for (let i = 1; i <= n; i++) {
    current += diff[i];
    if (current > max) {
      max = current;
    }
  }
  return max;
}
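Running it on a small case makes the difference-array idea tangible (the numbers below are my own worked example):

// Three overlapping range updates on an array of size n = 5
const queries = [[1, 2, 100], [2, 5, 100], [3, 4, 100]];

// diff after recording the queries (indices 0..6):
// [0, 100, 100, 0, 0, -100, -100]
// prefix sums over i = 1..5: 100, 200, 200, 200, 100
console.log(arrayManipulation(5, queries)); // 200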
Ah, of course – Prefix Sum! But I did notice that the provided solution was still using the "diff" array of n + 2 elements, which could potentially be quite large. I then asked the model:
Is it possible to avoid using a diff array with n elements? Can the answer be found using only the information from the queries array?
And the model returned an improved solution that processes the query ranges directly using a sweep line algorithm:
function arrayManipulation(n, queries) {
  const events = [];
  for (const [a, b, k] of queries) {
    events.push([a, k]);      // Start event at position a: add k
    events.push([b + 1, -k]); // End event at position b+1: subtract k
  }

  // Sort events by position. For the same position, process end events first (negative deltas).
  events.sort(([pos1, delta1], [pos2, delta2]) =>
    pos1 !== pos2 ? pos1 - pos2 : delta1 - delta2
  );

  let max = 0;
  let current = 0;
  for (const [pos, delta] of events) {
    current += delta; // Update running total
    if (current > max) {
      max = current; // Track maximum value
    }
  }
  return max;
}
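A quick sanity check on the same worked example confirms the two versions agree. The trade-off is worth spelling out: with Q queries, sorting the 2·Q events costs O(Q log Q) time versus the diff array's O(n + Q), but space drops from O(n) to O(Q) – a clear win when n is much larger than the number of queries.

const queries = [[1, 2, 100], [2, 5, 100], [3, 4, 100]];
console.log(arrayManipulation(5, queries)); // 200 – matches the diff-array version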
I wouldn’t give the model too much credit for genuinely generating an optimized solution on its own – after all, it taps into knowledge parsed from countless online sources, potentially reproducing answers verbatim from its training data. However, when observing its reasoning process and final output, it’s clear there’s a degree of emergent intelligence at play. A particularly powerful feature is the ability to interact dynamically with the model: asking follow-up questions, probing nuances or ambiguities, and observing how its responses often provide meaningful clarity – even when addressing complex or abstract topics.
After this brief test-drive, I'm impressed not only by the model's performance but also by its efficiency. I was equally surprised to learn it was trained at a fraction of the cost of OpenAI's o1, is completely free to use with no rate limits, and still delivers remarkable results. It's inspiring to see competition heating up in this space, driving innovations that are both groundbreaking and accessible to all. Hats off to the team behind this achievement!