Continuing from what I wrote last week, I want to talk a little more about my experience working with AI coding tools. Over the last week or so I have been experimenting with adding various AI tools to my workflow. I have mixed feelings about their use, but it is hard to deny their effectiveness.
On one hand, an LLM is better at coding than most people (myself included) can hope to be. On the other hand, LLMs often write pretty terrible code. I’m sure that AI will only get better at almost everything we ask of it, and I think it will also rapidly change what the job of a software engineer looks like.
In my short foray into working with the more common AI coding tools, I can't help noticing that the AI makes a lot of choices that are questionable at best and confounding at worst. I will say that, so far, the various LLMs I have tried were able to solve almost everything I asked of them, but they made some weird choices in doing so.
For one example, here is a piece of code that ChatGPT generated. It is supposed to look ahead in a stream of numbers and print any run of three or more consecutive values as a range.
for (int i = 0; i < n;) {
    int start = i;
    // Advance i to the end of the current run of consecutive numbers.
    while (i + 1 < n && nums[i + 1] == nums[i] + 1) {
        i++;
    }
    if (i - start >= 2) {
        // Runs of three or more get collapsed into a range.
        std::cout << nums[start] << "-" << nums[i];
    } else {
        // Shorter runs are printed one number at a time.
        for (int j = start; j <= i; ++j) {
            std::cout << nums[j];
            if (j < i) std::cout << " ";
        }
    }
    i++;
    if (i < n) std::cout << " ";
}
I have literally never seen anything like this, and it was the only one of the four AIs I tried that gave me something this ridiculous. Or maybe it is G E N I U S. I really couldn't tell, but I'm going to lean towards it being pretty bad.
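For comparison, here is how I would normally expect that logic to look. This is just a sketch under my own assumptions: the snippet above never shows its surroundings, so I am guessing that nums is a std::vector<int> and n is its length, and the function name printRuns is mine.

#include <iostream>
#include <vector>

// A more conventional take on the same logic: walk the sequence,
// find the end of each run of consecutive values, and either print
// the run as a range (three or more values) or one number at a time.
void printRuns(const std::vector<int>& nums) {
    const int n = static_cast<int>(nums.size());
    int i = 0;
    while (i < n) {
        // Find the last index j of the run starting at i.
        int j = i;
        while (j + 1 < n && nums[j + 1] == nums[j] + 1) {
            ++j;
        }
        if (i > 0) std::cout << " ";
        if (j - i >= 2) {
            std::cout << nums[i] << "-" << nums[j];
        } else {
            for (int k = i; k <= j; ++k) {
                if (k > i) std::cout << " ";
                std::cout << nums[k];
            }
        }
        i = j + 1; // Jump past the run we just printed.
    }
    std::cout << "\n";
}

Called with {1, 2, 3, 5, 6} this prints 1-3 5 6, the same output as the generated version. The difference is that each loop has one job, instead of the outer for loop quietly mutating its own counter in three different places.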
I think that the use (or overuse) of AI tools for coding will change how a lot of jobs are done. From the perspective of a software engineer, I think there will be a lot more work spent reading, testing, and debugging code written by AI. If my recent experience is anything to go by, companies will need more engineers to do all this work, not fewer.