GitHub Copilot and Others: A Test of Five Code Assistants for C#

Marek Sirkovský
11 min read · Apr 7, 2024


Is coding dead? The message from NVIDIA CEO Jensen Huang was crystal clear: “AI will take over coding.” But is that really the case? I tested five popular AI assistants for C# coding to see which one could potentially replace me.

Empty office — Igor Omilaev on Unsplash

After Huang’s controversial statement, NVIDIA announced that they’re preparing their own AI coding assistant, yet another product of the Cambrian explosion in the AI coding assistant space. The Cambrian explosion, around 541 million years ago, was a brief period in Earth’s history marked by a rapid increase in the diversity of life. Something similar is happening with AI coding assistants, possibly triggered by the popularity of ChatGPT. Some of you may remember the Cambrian explosion of JS frameworks after 2010.

These days, you have three main ways to consume AI suggestions in C# coding:

  • Using ChatGPT or its competitors through the chat on their websites.
  • Using code completion in your IDE.
  • Asking an AI-powered assistant through the chat integrated into your IDE.

Let’s ignore the first option since it forces you to leave your IDE. The second option, code completion, is primarily designed to enhance the flow of coding: you start writing code, and the AI generates suggestions to assist you. Both options are well-known and popular, but I’d like to focus on the third option, the assistants’ chat functionality. It seems more deliberate and less widely adopted.

What do I test?

I decided to test the code assistants on more real-world use cases, as I’ve watched too many tutorials demonstrating AI assistants’ capabilities by refactoring a factorial function. We don’t write factorial functions every day, do we?

I tested the assistants on a standard .NET application, specifically on a delete handler, which is discussed below. I didn’t include more exotic operations like code translation or explanation since they are not part of our daily assignments. Although these features could be useful for learning a new language, that is not my focus here.

I created a simple application with a minimal API, Entity Framework Core, and the MediatR library. I essentially test only the handler, which looks like this:

// Entity
public class User
{
    public required int User_Id { get; set; }
    public required string Name { get; set; }
    public required string Email { get; set; }
    public required bool Deleted { get; set; }
    public required bool Locked { get; set; }
}

public class MyDbContext(DbContextOptions<MyDbContext> options)
    : DbContext(options)
{
    public DbSet<User> Users { get; set; }
}


// DeleteUserCommandInput.cs
public record DeleteUserCommandInput(int UserId) : IRequest;

public class DeleteUserCommandHandler(MyDbContext context) : IRequestHandler<DeleteUserCommandInput>
{
    public async Task Handle(DeleteUserCommandInput request, CancellationToken cancellationToken)
    {
        var user = context.Users.Find(request.UserId);
        if(user != null)
        {
            if (user.Locked == false)
            {
                context.Users.Remove(user);
                await context.SaveChangesAsync();
            }
        }
    }
}
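
For completeness, the article only shows the handler and the DbContext. The surrounding minimal API wiring would look roughly like the sketch below; the endpoint route, the in-memory provider, and the MediatR 12-style registration are my own assumptions, not code from the sample.

// Program.cs (sketch): the endpoint and registrations are assumptions, not code from the article.
// UseInMemoryDatabase needs the Microsoft.EntityFrameworkCore.InMemory package.
using MediatR;
using Microsoft.EntityFrameworkCore;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddDbContext<MyDbContext>(options =>
    options.UseInMemoryDatabase("users"));
builder.Services.AddMediatR(cfg =>
    cfg.RegisterServicesFromAssemblyContaining<DeleteUserCommandHandler>());

var app = builder.Build();

// DELETE /users/{id} dispatches the command to DeleteUserCommandHandler.
app.MapDelete("/users/{id:int}", async (int id, IMediator mediator) =>
{
    await mediator.Send(new DeleteUserCommandInput(id));
    return Results.NoContent();
});

app.Run();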

I deliberately added a few issues to the code to see how good the assistants’ suggestions would be.

I want to test five tasks: four mundane tasks that developers deal with daily and one related to package management. The test cases:

  • Help with installing EF Core into the solution
  • Adding new code that respects the existing codebase
  • Generating comments
  • Improving the code
  • Generating unit tests

AI assistants that qualify for the test

It was unexpectedly difficult to choose capable AI assistants for the test. I found many AI assistants for C#, but many of them were just ChatGPT in disguise. Plus, I only wanted to test assistants available for Visual Studio or JetBrains Rider. I know you can use VS Code to write C#, but I’ve never seen anyone do it; it seems it’s always Visual Studio or JetBrains.

I found more than 30 code assistants for Visual Studio or JetBrains, including many “ChatGPT-in-VS” tools. After researching and testing, I selected these five:

GitHub Copilot

Copilot is the most famous code assistant. It comes from Microsoft and GitHub, is trained on GitHub repositories, and supports both Visual Studio and JetBrains Rider.

JetBrains AI assistant

I love JetBrains products, but I was a bit nervous after reading negative comments about their new AI assistant. This particular one was really scary:

However, I decided to give it a try. The JetBrains AI assistant supports Visual Studio (via the ReSharper plugin) and Rider.

CodeWhisperer from Amazon

The new kid on the block is CodeWhisperer, and it is still in beta. At the moment, CodeWhisperer supports only JetBrains Rider.

Tabnine

I think Tabnine is one of the older (if not the oldest) code assistants, having been founded in 2018. It supports Visual Studio and JetBrains Rider.

CodiumAI

According to the Codium documentation, its C# functionality is limited for now, but I have heard good things about Codium. It supports only JetBrains Rider (and VS Code).

Have you noticed that AI assistant logos often use purple and blue? Let’s compare the assistants and see whether their features are as similar as their logos.

Help with installing EF Core into the solution

The first thing I want to test is how precise the instructions for managing NuGet packages are.

The prompt: How to add an Entity Framework Core to the solution?

JetBrains AI

The outcome was really good. The instructions were clear and relevant to Rider, but at the same time they were generic: they didn’t reflect the changes in .NET 8 or the fact that I was using a minimal API.

Codium and Tabnine provided less clear instructions. They gave instructions for Visual Studio instead of Rider and didn’t acknowledge my use of .NET 8 and a minimal API. Furthermore, neither provided any information about registering Entity Framework Core in Program.cs.

GitHub Copilot also gave instructions for Visual Studio, quite similar to Tabnine’s.

CodeWhisperer showed me a solution for Visual Studio without any code. On the other hand, it showed me the sources from which it extracted the information.

👑 Winner: JetBrains, as it provided the most precise and context-aware details.
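
For reference, the part most answers skipped is the registration step in Program.cs. In a .NET 8 minimal API it comes down to installing a provider package and a few lines like the ones below; this is a generic sketch of mine, not a quote from any assistant, and the SQL Server provider and connection string name are assumptions.

// Program.cs
// Assumes the packages were added first, e.g.:
//   dotnet add package Microsoft.EntityFrameworkCore
//   dotnet add package Microsoft.EntityFrameworkCore.SqlServer
using Microsoft.EntityFrameworkCore;

var builder = WebApplication.CreateBuilder(args);

// Register the DbContext so it can be injected into handlers like DeleteUserCommandHandler.
builder.Services.AddDbContext<MyDbContext>(options =>
    options.UseSqlServer(builder.Configuration.GetConnectionString("Default")));

var app = builder.Build();
app.Run();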

Adding new code that respects the existing codebase

In this test, I asked the AI to write code that fetches data from the EF context. To test the assistants’ ability to pick up names from the existing codebase, I used a nonstandard (non-conventional) property name, User_Id, in the User class.

Code:

public class User
{
    public required int User_Id { get; set; }
    public required bool Locked { get; set; }
    ...
}


public async Task Handle(DeleteUserCommandInput request,
    CancellationToken cancellationToken)
{
    // TODO add fetching
    if(user != null)
    {
        context.Users.Remove(user);
        await context.SaveChangesAsync(cancellationToken);
    }
}

The prompt: fetch a user from the database using the ID column as the identifier

JetBrains AI inserted the code in the correct location but with an incorrect property name: it used UserId instead of User_Id. The advantage of JetBrains is its seamless integration with Rider; the generated code is displayed as a diff, as in the image:

But when I hit “Accept all”, the rest of the handler was deleted! Extremely unfriendly. The good news is that the Rider developers say they are aware of this issue and are working on a fix.

When I asked GitHub Copilot the same question, it gave me a generic response. I had to reference the DeleteUserCommandHandler file to get a more or less correct answer. However, like JetBrains AI, Copilot also generated the incorrect property name.

public async Task Handle(DeleteUserCommandInput request, CancellationToken cancellationToken)
{
    var user = await context.Users
        .FirstOrDefaultAsync(u => u.UserId == request.UserId, cancellationToken);
    ...
}

Codium, CodeWhisperer, and Tabnine produced answers similar to those of JetBrains AI and GitHub Copilot.

However, in Codium, I needed to change the “Focus on” input to the Handle method to get the most precise answer. Even then, the property name was still incorrect.

👑 Winner: No one, as none of them picked the correct name.
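
For the record, an answer that compiles against the entity above has to use the nonstandard User_Id property. A minimal sketch of what I was hoping for (my own code, not any assistant’s output):

public async Task Handle(DeleteUserCommandInput request, CancellationToken cancellationToken)
{
    // User_Id is the deliberately non-conventional property name on the User entity.
    var user = await context.Users
        .FirstOrDefaultAsync(u => u.User_Id == request.UserId, cancellationToken);

    if (user != null)
    {
        context.Users.Remove(user);
        await context.SaveChangesAsync(cancellationToken);
    }
}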

Generating comments

In this use case, I asked the assistants to generate a comment for DeleteUserCommandHandler.

The prompt: generate a comment

The code:

public async Task Handle(DeleteUserCommandInput request,
    CancellationToken cancellationToken)
{
    var user = await context.Users.FindAsync(request.UserId, cancellationToken);
    if (user?.Locked == false)
    {
        context.Users.Remove(user);
        await context.SaveChangesAsync(cancellationToken);
    }
}

JetBrains AI generated a generic comment that misses the critical piece of information: only an unlocked user can be deleted.

/// <summary>
/// Handles the DeleteUserCommand by deleting a user from the database.
/// </summary>
/// <param name="request">The DeleteUserCommandInput object containing the user ID to be deleted.</param>
/// <param name="cancellationToken">The cancellation token.</param>
/// <returns>A task representing the asynchronous operation.</returns>

Tabnine and GitHub Copilot generated similarly boring comments. All three were probably tired after the testing.

Oh, I should have mentioned that I anthropomorphize computer things regularly. You won’t believe how skilled I am at verbal communication with the C# compiler or my internet router.

Whisperer nearly clinched victory: its comment reflected that the user is removed only when unlocked.

/// <summary> 
/// Handles the deletion request by finding the user by ID
/// and removing them from the database if unlocked.
/// </summary>

However, Whisperer provided the code in the following unformatted way:

👑 The winner is Codium, as it produced a fairly lengthy comment that was also the most accurate. Codium’s comment states that the user must be unlocked before being deleted.

/// <summary>
/// Asynchronously handles the deletion of a user from the database.
/// </summary>
/// <param name="request">The command input containing the ID of the user to be deleted.</param>
/// <param name="cancellationToken">A token to observe while waiting for the task to complete.</param>
/// <remarks>
/// This method first attempts to find a user in the database matching the provided user ID from the request.
/// If a user is found and they are not locked, the user is removed from the database.
/// This operation is asynchronous and supports cancellation.
/// </remarks>
/// <returns>A Task representing the asynchronous operation.</returns>

Improving the code

I asked AI to improve the following method:

public async Task Handle(DeleteUserCommandInput request,
    CancellationToken cancellationToken)
{
    var user = context.Users.Find(request.UserId);
    if(user != null)
    {   if (user.Locked == false)
        {
            context.Users.Remove(user);
            await context.SaveChangesAsync();
        }
    }
}

The main issues with the code are:

  • It’s poorly formatted (note the brace and the inner if crammed onto one line).
  • The two nested ifs can be combined into one.
  • The Find method is synchronous and blocks the thread.

The prompt: Improve the code

JetBrains AI

The result was:

It’s not bad. However, it would be better to use FindAsync with a cancellation token instead of Find. Additionally, modern C# provides a more elegant way to express the condition:

if (user?.Locked == false)
// or
if (user is { Locked: false })

Tabnine provided three suggestions. The first one, “Extract the code that deletes the user into a separate method”, is fine, but it feels like slight overengineering.

The second one, “Add null checks to ensure that the user is not null before deleting it”, is basically useless, as I already have this check in place.

The third one, using AsNoTracking, borders on premature optimization.

GitHub Copilot

The suggestions were almost correct. There was only one issue: the code couldn’t be compiled due to the wrong property name.

Whisperer

Whisperer provided basic guidelines that don’t apply here, along with questionable recommendations such as using AutoMapper.

Codium

Codium’s suggestions covered all the issues and also included a few extra improvements, such as adding exception handling and logging to the code.

👑 The winner is Codium, as its suggestions were the most structured and precise.
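
Putting the valid suggestions together, the improved handler would look roughly like this. This is my own consolidation of the feedback, not a verbatim answer from any assistant:

public async Task Handle(DeleteUserCommandInput request, CancellationToken cancellationToken)
{
    // FindAsync instead of the blocking Find, with the cancellation token passed through.
    var user = await context.Users.FindAsync(new object[] { request.UserId }, cancellationToken);

    // The two nested ifs collapsed into a single pattern match.
    if (user is { Locked: false })
    {
        context.Users.Remove(user);
        await context.SaveChangesAsync(cancellationToken);
    }
}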

Generating unit tests

The last test is to generate unit tests for DeleteUserCommandHandler.

The prompt: Generate unit tests for DeleteUserCommandHandler

The tests generated by JetBrains AI were correct and properly mocked using the Moq library. It generated two tests:

  • Handle_ShouldRemoveUnlockedUser
  • Handle_ShouldNotRemoveLockedUser

The sample:

[Fact]
public async Task Handle_ShouldRemoveUnlockedUser()
{
    var commandInput = new DeleteUserCommandInput(1);
    await _handler.Handle(commandInput, CancellationToken.None);
    _mockSet.Verify(m => m.Remove(It.Is<User>(u => u.User_Id == 1)), Times.AtLeastOnce());
    _mockContext.Verify(m => m.SaveChangesAsync(CancellationToken.None), Times.Once());
}

The tests are sort of useful, but JetBrains wasn’t the best.

Tabnine created these tests:

  • Handle_GivenInvalidRequest_ShouldNotDeleteUser
  • Handle_GivenValidRequest_ShouldDeleteUser

Unfortunately, the generated code couldn’t be compiled because it references an unknown type called AssistantsAIContext.


public DeleteUserCommandHandlerTests()
{
    var options = new DbContextOptionsBuilder<AssistantsAIContext>()
        .UseInMemoryDatabase(Guid.NewGuid().ToString())
        .Options;

    context = new AssistantsAIContext(options);
}

[Fact]
public async Task Handle_GivenValidRequest_ShouldDeleteUser()
{
    // Arrange
    var userId = 1;
    var request = new DeleteUserCommandInput(userId);
    var user = new User { User_Id = userId, Locked = false };

    context.Users.Add(user);
    await context.SaveChangesAsync();

    // Act
    await _handler.Handle(request, CancellationToken.None);

    // Assert
    var deletedUser = await context.Users.FindAsync(userId);
    deletedUser.Should().BeNull();
}

GitHub Copilot created only one test, possibly due to fatigue or overconfidence in my code.

  • Handle_UserExistsAndNotLocked_UserRemoved

After I asked GitHub Copilot to create three unit tests, it added these three tests:

  • Handle_UserExistsAndNotLocked_UserRemoved
  • Handle_UserExistsAndLocked_UserNotRemoved
  • Handle_UserDoesNotExist_NoExceptionThrown

Sample:

[Fact]
public async Task Handle_UserExistsAndNotLocked_UserRemoved()
{
    // Arrange
    var options = new DbContextOptionsBuilder<MyDbContext>()
        .UseInMemoryDatabase(databaseName: "TestDatabase1")
        .Options;

    var user = new User { UserId = 1, Locked = false };

    using (var context = new MyDbContext(options))
    {
        context.Users.Add(user);
        await context.SaveChangesAsync();
    }

    var request = new DeleteUserCommandInput(1);
    var handler = new DeleteUserCommandHandler(new MyDbContext(options));

    // Act
    await handler.Handle(request, CancellationToken.None);

    // Assert
    using (var context = new MyDbContext(options))
    {
        Assert.Null(await context.Users.FindAsync(1));
    }
}

The code is fine. I don’t like setting up the database in every test, but you can probably instruct Copilot to move this setup to the constructor.
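
For example, the database setup could live in the test class constructor, along these lines. This is my own sketch of what I would ask Copilot for, assuming xUnit, the EF Core in-memory provider, and that User_Id is configured as the key; it is not Copilot’s actual output.

public class DeleteUserCommandHandlerTests
{
    private readonly MyDbContext _context;
    private readonly DeleteUserCommandHandler _handler;

    public DeleteUserCommandHandlerTests()
    {
        // xUnit creates a new instance per test, so each test gets a fresh in-memory database.
        var options = new DbContextOptionsBuilder<MyDbContext>()
            .UseInMemoryDatabase(Guid.NewGuid().ToString())
            .Options;

        _context = new MyDbContext(options);
        _handler = new DeleteUserCommandHandler(_context);
    }

    [Fact]
    public async Task Handle_UserExistsAndNotLocked_UserRemoved()
    {
        // Arrange
        _context.Users.Add(new User
        {
            User_Id = 1, Name = "Test", Email = "test@example.com", Deleted = false, Locked = false
        });
        await _context.SaveChangesAsync();

        // Act
        await _handler.Handle(new DeleteUserCommandInput(1), CancellationToken.None);

        // Assert
        Assert.Null(await _context.Users.FindAsync(1));
    }
}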

Whisperer was the lazy one. It showed me three snippets without any further information, which was really useless. The snippet looked like this, and the CreateUnlockedUser method was missing:

[Test]
public async Task Handle_RemovesUserIfUnlocked()
{
    // Arrange
    var user = CreateUnlockedUser();

    // Act
    await handler.Handle(new {UserId = user.Id});

    // Assert
    context.Users.Verify(u => u.Remove(user));

Codium really shines at generating tests. It even provides a dialog to set up your test environment. Codium’s tests covered the same use cases as Copilot’s, plus a few more. There were also several obscure ones, like the following:

“Handle concurrent requests to delete the same user” or “Handle invalid input (e.g., negative user ID)”.

You could imagine that these obscure tests might be helpful in discovering edge cases the developer wouldn’t even think about.
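For illustration, the invalid-input case could look something like this. It’s my own sketch of the idea, reusing the constructor-based setup shown earlier, not Codium’s generated code:

[Fact]
public async Task Handle_NegativeUserId_DoesNotThrowOrDeleteAnything()
{
    // Arrange: no user with a negative ID exists in the database.
    var request = new DeleteUserCommandInput(-1);

    // Act: the handler should find nothing and return without throwing.
    var exception = await Record.ExceptionAsync(() => _handler.Handle(request, CancellationToken.None));

    // Assert
    Assert.Null(exception);
}
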

Note: I am aware that I could coerce the other assistants into creating additional unit tests, but I prefer Codium’s UI for the better developer experience.

👑 The winner is Codium. It surpasses the other assistants by far.

Summary

That’s all. When I began writing this blog post, I believed GitHub Copilot would come out on top, but that turned out not to be the case. The overall winner is Codium.

Please keep in mind that I only evaluated the assistants’ chat features. Privacy and security are also important, and they deserve a separate blog post.

And how much should we worry about our future?

AI assistants can perform really well and are constantly improving; it feels like I get an update for one of these five plugins every couple of days. However, I strongly believe we still have a LOT of coding work ahead of us.
