GitHub Copilot Agent, as of June 2025, looks much more capable than it did 2 months ago.
Abstract: After the appearance of the GitHub Copilot Agent, I decided to try it on my real-life ASP.NET8 project of 123,000 SLOC. I tried some limited-scope tasks, and the initial results are much better than in my GitHub Copilot tests two months ago.
1. GitHub Copilot in VS2022
I am working on the development of a .NET8/C#/ASP.NET8/EF8 application in Visual Studio 2022, which now has around 123,000 lines of code (SLOC), of which about 50,000 belong to the EF database-first model.
I have a GitHub Copilot Pro+ subscription. So far, that AI tool has been good for limited-scope tasks. I wanted to try the new GitHub Copilot Agent mode. Below are notes from my regular work.
The environment is:
- Visual Studio 2022, 17.14.5
- GitHub Copilot (GHC). License Copilot Pro+
- Agent mode, GPT-4o, GPT-4.1, Claude 3.7 Sonnet
- C#/.NET8/ASP.NET/JavaScript/jQuery/HTML/CSS/SQL-Server
2. Anecdotal Experience with a Real ASP.NET8 Project
2.1. Impressions and observations
- GHC is not equally smart every time. Sometimes, for a very similar task, it “sees” different logic in the original template and improvises. It gets its own idea, which makes sense in a way, “like a guy who does not understand the project's business logic well”, and it makes it work and build, even though nobody really needs that functionality. One result was not that good: GHC invented and calculated its own property, “countOfLinkedAccounts”, and did not see that I already calculate the property “numberOfAccounts” elsewhere, with a more elaborate calculation (soft delete, etc.) than the one it produced (a minimal sketch of that difference follows this list).
- A small error in C#: there was inheritance of interfaces, and it added the same method twice, at different levels of the interface hierarchy. It still got the project to compile and build (a minimal reproduction also appears after this list).
- It is often easier to manually move generated methods to the proper files than to write a whole sentence to GHC. Generally, generated code needs to be moved/erased/completed a bit manually.
- Sometimes GHC gets confused by comments that contain code. There was an old version of a method kept in comments, and GHC started to modify that one to make it active, and then also modified the actual method of the same name. Maybe the final result was good, but that is not the work process I want, and I cannot easily see/verify in the editor that everything is fine. It is a limitation of us humans: it is hard to review all the changes when they are spread across too many places. I rejected all of it. I will delete the comments that confuse GHC and start over from the beginning.
- It is kind of easy: if you cannot understand what GHC did, you reject it all and make it start from the beginning. It does not complain at all, like a coworker would. You can be as harsh as you want.
- Sometimes a simple request, just to move a method from one file to another, makes it work hard. I could see GHC re-reading the files and getting stuck; it even failed the first build, and in the final solution it renamed some random variables. Too much confusion for a simple request.
- I am using a separate text editor (Notepad++) to carefully prepare commands for GHC. I double-check the method and file names I am giving it and reread the command to verify that it sounds reasonable. I stopped using # for file names; I just use natural language and say “file abc.cs” or “class abc in file abc.cs”, and it all works.
- I am not using any specific “prompt engineering” technique; I just explain things reasonably, as I would to another developer. I tell it, “Look at this method, which can be useful for this task.” The Internet is now full of “prompt engineering gurus” selling “street smarts” on how to outsmart AI tools, like “give me top 5% answers only,” etc. I assume the AI is already doing its best effort, and I do not think I can pressure it to work harder. But of course, that is my free advice; if you want to pay $500 for a self-proclaimed “prompt engineering guru” course that teaches you how to “screw the AI’s mind into working harder”, go do it. People tend to believe more in expensive doctors than in cheaper ones.
- So, I give a task to GHC, it needs 3-5 minutes to do it and build it, and in the meantime I go check my email or read something on the Internet. When I see it has finished, I review what it did. That is the tricky part, and Git is my main friend there, because I can see all the changes. If I do not like the result, or do not understand it because it made strange changes in too many places, I undo everything, review my command, improve it with more details referring to the things I did not like, and tell GHC to start from the beginning. Can we call this “a work process with GitHub Copilot Agent”? Basically, it is a Generate-Review-Drop-Generate-Review-Accept cycle. Reviewing is a burden on humans and requires significant effort, because you are reading someone else’s code, but I just do not want to accept anything into my repository that I have not read and understood.
- Review requires understanding what GHC did and why, and that is not easy. It took me time to see why it deleted a property in some model file, but after review, GHC was right: that property was dead code, never referenced in that particular component. I had just copied it from the original component, which does need it. So, GHC is sometimes smarter than I am, but I still need to verify it. Only a machine can track each of the thousands of variables in my project and see whether one has become obsolete.
- Sometimes you tell GHC to clone a component and modify it, and GHC omits a VERY IMPORTANT parameter that I am passing in JavaScript, without which the component would NOT work. It seems that if it cannot figure something out, it omits it, even though it was told to just clone the code. Only human review of the generated code can detect this. If GHC had really looked into the AJAX method on the server side, it would have seen that the parameter is used, but it did not fully understand the code. It was readable from the code, yet it made a logical error.
- That is the problem I was talking about: if I accept code without a VERY CAREFUL REVIEW, I will later need to go back and clean up bugs. That is why I GO SLOW AND REVIEW every line of generated code. Slow in review, but it saves time overall because of fewer defects.
![Failed]()
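To make the “soft delete” point above concrete: the agent's invented countOfLinkedAccounts was effectively a plain count, while the existing numberOfAccounts logic has to skip soft-deleted rows. This is only a minimal sketch with a hypothetical, simplified Account type, not the project's actual code:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical, simplified entity; the real EF model is far more elaborate.
public class Account
{
    public int ContractId { get; set; }
    public bool IsDeleted { get; set; }   // soft-delete flag
}

public static class ContractStats
{
    // Roughly what the agent generated: a naive count of linked rows.
    public static int CountOfLinkedAccounts(IEnumerable<Account> accounts, int contractId)
        => accounts.Count(a => a.ContractId == contractId);

    // Roughly what the existing "numberOfAccounts" logic does: skip soft-deleted rows
    // (the real calculation has further rules on top of this).
    public static int NumberOfAccounts(IEnumerable<Account> accounts, int contractId)
        => accounts.Count(a => a.ContractId == contractId && !a.IsDeleted);

    public static void Main()
    {
        var accounts = new List<Account>
        {
            new Account { ContractId = 1 },
            new Account { ContractId = 1, IsDeleted = true },  // soft-deleted
        };

        Console.WriteLine(CountOfLinkedAccounts(accounts, 1)); // 2
        Console.WriteLine(NumberOfAccounts(accounts, 1));      // 1
    }
}
```

With one soft-deleted account, the two counts already differ, and that is exactly the kind of discrepancy only a careful review catches.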
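The duplicated interface method mentioned above is also easy to reproduce. The sketch below uses hypothetical names, not the project's interfaces: the same signature is declared at two levels of an interface hierarchy, the compiler only emits a hiding warning (CS0108), and the solution still builds.

```csharp
public interface IAccountReader
{
    int GetAccountCount(int contractId);
}

// The derived interface repeats the same method signature. This compiles,
// producing only warning CS0108 ("hides inherited member..."), so the build
// succeeds even though the duplicate declaration is pointless.
public interface IAccountService : IAccountReader
{
    int GetAccountCount(int contractId);
}

public class AccountService : IAccountService
{
    // A single implicit implementation satisfies both declarations.
    public int GetAccountCount(int contractId) => 0;
}
```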
- GHC sometimes needed to build the project 4 times, so the power of your local resources counts. That is why I have a 16-core processor with 32 GB RAM. It is a kind of fancy distributed computing: the build work is done locally on your machine, and the AI smarts run on some supercomputer in California.
- I see “hallucinated properties” again. It finished work on the .cshtml file, but it did NOT build it. Some strings were added hardcoded, rather than taken from the Resources files. I asked it to add them to the Resources files and build the project. It said it can’t and that I need to add them myself. OK, I will do that. I added 2 strings to the Resource file, and it compiles now (a small sketch of the hardcoded-string vs. resource-lookup difference follows the screenshots below).
![Validated]()
![Rebuild]()
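About those hardcoded strings: the difference between what the agent left in the view and what the project's localization convention requires boils down to the sketch below. It uses a generic ResourceManager lookup with a hypothetical "MyApp.Resources.Labels" resource and key names; the real project has its own resource classes.

```csharp
using System.Globalization;
using System.Resources;

public static class LabelDemo
{
    public static string LinkedContractsTitle()
    {
        // What the agent produced: a hardcoded literal in the .cshtml.
        string hardcoded = "Linked contracts";

        // What the project expects: a lookup in a .resx resource file
        // (hypothetical base name "MyApp.Resources.Labels" with a
        // "LinkedContracts" key), so the string can be localized per culture.
        var labels = new ResourceManager("MyApp.Resources.Labels",
                                         typeof(LabelDemo).Assembly);
        return labels.GetString("LinkedContracts", CultureInfo.CurrentUICulture)
               ?? hardcoded; // fall back if the key has not been added yet
    }
}
```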
3. Form created using GitHub Copilot
Here is the result of the work.
![GitHub Copilot]()
- It took maybe 15 prompts (some of them research) to create the form.
- It was not that easy. I divided the task into several subtasks, creating it method by method, so I could fully control the creation and verify the quality of each generated method. Reviewing code is serious work, in my opinion.
- It was pretty smart with the .cshtml file: it found the proper icons, even using different icons for singular and plural Account and Contract. It looks like it was reading the project files to find them.
4. Conclusion
This was still a task carefully chosen for GHC, in the sense that it was template-based: GHC only needed to clone existing code and adapt it from the Account entity to the Contract entity.
The burden is still on the human programmer to review all the generated code and check for errors. In this version of GHC, syntax errors are very rare, but there were a few logical errors.