$title =

CST 370 – Week 2

;

$content = [

It is hard to deny the effectiveness of LLMs for writing code, especially when companies like Google report that AI now generates over a quarter of their new code. In an effort to see how AI coding tools might fit into my workflow, I decided to compare several of them while working on my assignments this week.

There were two assignments to complete, both of which involved grouping numbers fed into the program.

For the first program, I didn’t actually intend to use any AI tools. The prompt we were given was to find the minimum distance between any two numbers in a list. If multiple pairs shared the same minimum distance, all of them needed to be printed. After fleshing out the basic logic of the program, I started putting a few things together.

Like most people these days, I have Copilot in VS Code, but I’ve never really used it. For whatever reason, I decided to click the little Copilot icon, and I was impressed with what it could do.

Besides a method to grab the input numbers and store them in a vector, I didn’t have much written. The thing that impressed me, and what prompted me to compare AI tools this way, was how Copilot was able to interpret what I was trying to do without me even asking.

As far as I can tell, the variable names “pairs” and “minDistance” must have been enough context for the AI to know what I wanted from it. Once I started writing a for-loop, Copilot’s autocomplete filled in the logic for comparing the distance between numbers and adding pairs to the vector. It even handled the case where a new minimum was found and the vector needed to be cleared.
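
For reference, here is a minimal sketch of that logic in C++. This is my own reconstruction, not the exact code Copilot generated; the brute-force nested loop and the input handling are assumptions, and only the variable names “pairs” and “minDistance” come from my actual file.

    #include <cstdlib>
    #include <iostream>
    #include <utility>
    #include <vector>

    int main() {
        // Read the input numbers into a vector.
        std::vector<int> nums;
        int x;
        while (std::cin >> x) {
            nums.push_back(x);
        }

        int minDistance = -1;                    // sentinel: no distance computed yet
        std::vector<std::pair<int, int>> pairs;  // every pair at the current minimum

        // Brute-force comparison of every pair of numbers.
        for (std::size_t i = 0; i < nums.size(); ++i) {
            for (std::size_t j = i + 1; j < nums.size(); ++j) {
                int dist = std::abs(nums[i] - nums[j]);
                if (minDistance == -1 || dist < minDistance) {
                    minDistance = dist;
                    pairs.clear();               // new minimum found: discard old pairs
                    pairs.push_back({nums[i], nums[j]});
                } else if (dist == minDistance) {
                    pairs.push_back({nums[i], nums[j]});
                }
            }
        }

        // Print every pair that shares the minimum distance.
        for (const auto& p : pairs) {
            std::cout << p.first << " " << p.second << "\n";
        }
        return 0;
    }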

I was a little surprised, but after taking a second to check it, I realized it was exactly what I needed. At this point I was curious to see how different LLMs would handle the next part of our assignment.

The second problem we were given was to take another input of numbers and compress it, so that any run of consecutive numbers was printed as “low”-“high”. We were given a prompt similar to the first problem, as well as some sample inputs and their expected outputs (this will be relevant later).
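
To make the task concrete, here is a rough sketch of one way to solve it in C++. This is my own illustration of the expected behavior, not one of the generated solutions, and it assumes the input is already sorted and that a lone number is printed by itself.

    #include <iostream>
    #include <vector>

    int main() {
        // Read the input numbers (assumed sorted for this sketch).
        std::vector<int> nums;
        int x;
        while (std::cin >> x) {
            nums.push_back(x);
        }
        if (nums.empty()) {
            return 0;
        }

        int low = nums[0];   // start of the current run
        int high = nums[0];  // end of the current run

        // Walk the list, extending the run while numbers stay consecutive.
        for (std::size_t i = 1; i < nums.size(); ++i) {
            if (nums[i] == high + 1) {
                high = nums[i];           // still consecutive: extend the run
            } else {
                // Run ended: print "low-high" for a range, or the single number.
                if (low == high) {
                    std::cout << low << "\n";
                } else {
                    std::cout << low << "-" << high << "\n";
                }
                low = high = nums[i];     // start a new run
            }
        }

        // Print the final run.
        if (low == high) {
            std::cout << low << "\n";
        } else {
            std::cout << low << "-" << high << "\n";
        }
        return 0;
    }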

The LLMs that I chose to use were Gemini (Google), ChatGPT (OpenAI), DeepSeek, and Claude (Anthropic). Each was given an identical prompt, as well as a sample input and the expected output. For the sake of brevity, I won’t post the prompt or the code that was generated. Instead, I want to focus on a couple of things that I found interesting.

For starters, only DeepSeek’s generated code solved the problem correctly. Each tool technically produced working code; however, due to an inconsistency between the prompt and the sample run provided, ChatGPT and Claude did not group numbers correctly. Instead of grouping any run of consecutive numbers, they both grouped only sets of exactly three consecutive numbers. Additionally, DeepSeek’s solution was the only one that accepted the input and printed the output correctly.

There is a lot more I want to get into with the solution each tool generated, but it is honestly a lot to go over. I may do a larger writeup comparing each solution, but for multiple reasons it is clear that DeepSeek did the best job.

];

$date =

;

$category =

;

$author =

;

$previous =

;