AI programming is creepy!
I recently got accepted into GitHub's Copilot program. This is an AI-driven pair-programming experiment. Basically, it is an extension for your favorite IDE that tries to guess what you are doing and provides code suggestions.
On paper, it sounds like Skynet, but in reality it is just a neural network trained on billions of lines of code from GitHub. While it doesn't know your code, it has almost certainly seen something similar to it in its training set.
As our AI technology has not advanced past the first step of AI evolution, it must be trained extensively on anything it needs to do. For those curious, the next step in the AI evolution is AGI, or Artificial General Intelligence: an AI that can approach any unknown problem and tackle it without specific training. Right now we have to train AI on very specific problems, and that's all it can do. Take board games: we have to train an AI on the specific game(s) it will play, like Chess, and training it on Chess doesn't allow it to play Monopoly.
So the idea with Copilot is not to code for you, but with you, and that's why they market it as pair programming. It gives you suggestions, and you choose to follow them, take pieces of them, or ignore them altogether. It isn't going to program for you; mostly it saves you from looking up code examples, although a lot of the time you are on your own. When it works, though, it is creepy af!
Let me show you some examples.
I was working with Beem (a Python Hive library) on one of the small components of my new project. I was specifically adding some error handling to make sure an account exists before moving forward. I was making what is called a guard clause: a check, or gate, you put at the start of a complex function to end it immediately if a certain condition isn't met.
```python
try:
    _ = Account(to)
except
```
I was at this point, about to type in the exception name, but I knew I had to import it first. I was about to type the import in, and this is what I saw.
Copilot knew I needed to add an exception, and it knew specifically which one. Now, I am 99.9% confident Copilot was not trained on code that used Beem, so it likely saw that I was typing an exception and matched the name from context. Still, it was really cool, creepy, and helpful that I only had to hit tab to fill it in, appropriate comma and all.
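For context, here is roughly what the finished guard clause looks like. This is a sketch, not my actual code: Account and AccountDoesNotExistsException are local stand-ins for beem's classes so the pattern is self-contained, and send_tokens is a hypothetical caller.

```python
class AccountDoesNotExistsException(Exception):
    """Local stand-in for beem's exception of the same name."""

_KNOWN_ACCOUNTS = {"alice", "bob"}  # stand-in for the real blockchain lookup

class Account:
    """Local stand-in for beem.account.Account."""
    def __init__(self, name):
        if name not in _KNOWN_ACCOUNTS:
            raise AccountDoesNotExistsException(name)
        self.name = name

def send_tokens(to, amount):
    # Guard clause: verify the account exists, bail out immediately if not.
    try:
        _ = Account(to)
    except AccountDoesNotExistsException:
        return 0
    # ...rest of the transfer logic would go here...
    return amount

print(send_tokens("alice", 5))    # 5
print(send_tokens("mallory", 5))  # 0
```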
Another example that happened just a few seconds before.
I was building in some checks for transfer memos, and I made some changes to add a command to minimize errors.
When I went down to add a check for it, it knew almost exactly what I was about to type. I ended up going the guard clause route (a very good programming practice, learn them!), so what I actually went with was very close.
```python
if command != 'mint':
    return 0
```
In fact, I hit tab anyway, let it fill it in, and just changed the == to !=. Still saved me a few keystrokes.
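The two forms are equivalent, just inverted. A minimal sketch of the difference (do_mint is a hypothetical placeholder for the real mint logic):

```python
def do_mint():
    return 1  # hypothetical placeholder for the real mint logic

def handle_copilot_style(command):
    # Copilot's suggestion: positive check, real work nested inside.
    if command == 'mint':
        return do_mint()
    return 0

def handle_guard_style(command):
    # Guard clause: invert the check and return early,
    # keeping the happy path unindented.
    if command != 'mint':
        return 0
    return do_mint()

for cmd in ('mint', 'burn'):
    assert handle_copilot_style(cmd) == handle_guard_style(cmd)
```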
Now, Copilot doesn't just write code; it also assists with commenting. This comment was written entirely by the AI.
```python
# Confirm mint command
if command != 'mint':
    return 0
```
From what I have noticed, the AI uses your code as much as it uses the data set it was trained on.
Something I commonly do is fetch blocks and then, almost immediately after, iterate over them. I typically start complex tasks by writing comments with pseudo code representing each step I am about to do, then replace each comment with the actual code.
In this case I led with # fetch blocks; it knew I would then iterate them and wrote some pseudo code to represent that.
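The comment-first workflow ends up looking something like this. It's a sketch: fetch_blocks here is a hypothetical stand-in for the real blockchain call, not Beem's actual API.

```python
def fetch_blocks(start, stop):
    # Hypothetical stand-in for the real RPC/library call.
    return [{"block_num": n, "transactions": []} for n in range(start, stop)]

# fetch blocks
blocks = fetch_blocks(100, 105)

# iterate blocks
for block in blocks:
    # each comment above started life as pseudo code
    # and got replaced with a real step, one at a time
    print(block["block_num"])
```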
If that doesn't impress you, maybe this will.
Here is an example of me creating a function to upload an image to imgur.
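For reference, the moving parts of an anonymous imgur upload look roughly like this. This is a sketch following imgur's public v3 API (Client-ID header, base64 image payload); build_imgur_upload is my own hypothetical helper, and actually sending the request is left to whatever HTTP library you prefer.

```python
import base64

IMGUR_UPLOAD_URL = "https://api.imgur.com/3/image"

def build_imgur_upload(image_bytes, client_id):
    # Anonymous uploads authenticate with a Client-ID header and POST
    # the image as base64 in the request body (imgur API v3).
    headers = {"Authorization": f"Client-ID {client_id}"}
    payload = {"image": base64.b64encode(image_bytes).decode("ascii")}
    return IMGUR_UPLOAD_URL, headers, payload

url, headers, payload = build_imgur_upload(b"\x89PNG...", "my-client-id")
print(url)  # https://api.imgur.com/3/image
```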
I have had this sort of moment a few times in just 48 hours of using Copilot. It's not always right, but it is certainly helpful. In fact, I had to do something like this with a very obscure framework I doubted it had any previous training on, and lo and behold, it had the function I needed. It wasn't perfect, but I actually used the 10 lines of code and 6 comments in my final code. Granted, I had to change some of it and it wasn't exactly what I was looking for, but it took me from knowing very little about what I needed to do to being on the right track.
The function ended up taking me over an hour to write, and it ended up being only 6 lines of code. I had to do a lot of digging and experimenting to find the proper syntax. Seeing as there really is no example code out there, it was a painful process. Oddly enough, Copilot knew enough to give me something to work with.
Some of the Copilot examples I have seen on the net are really cool, from complete apps to tic-tac-toe games written using Copilot alone. In my experience, it doesn't suggest anywhere near enough to do anything like that; in fact, replaying some of those examples never triggered the same code generation. Maybe things have changed since then, but I find it doesn't speak up that often. When it does, though, it is usually helpful, even if it just jogs your brain and puts you on the right path.
I don't see Copilot turning an inexperienced programmer into a Senior programmer, but it certainly helps save some time here and there. I am still learning how to take better advantage of it, but there isn't much you can do with it beyond writing code and seeing what it suggests. You can, however, cycle through alternative suggestions with Alt-[ and Alt-] to see what else it has (if anything).
A big concern of mine was my code being shared or used in training. You do not have to worry about this: Copilot only collects data on which suggestions you accept, not how you modify them or any of your own code. That telemetry helps improve the model, but training itself is done on selected open source code on GitHub.
If you want to sign up for Copilot, you can apply here.
I have noticed the amount of users with the Copilot extension installed has grown from around 80K to almost 500K, so it looks like they are opening the doors to many more users.