All right, we're back! We're going to be fuzz testing now, and I had to struggle a little bit because I haven't done Go fuzzing before. Also, in the previous episode I told it to map one to A in our Excel column index mapping, but midway through recording I told it to map zero to A instead, so it did some weird stuff there. I went over it.
This is the kind of thing LLMs notoriously have a hard time with: basically reasoning and computing. They can't compute; they operate on probabilistic vibes. So I had to work some of those values out by hand and made all the unit tests pass. Actually, I just had it tab-complete this, because it has probably seen this kind of code enough.
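For reference, a zero-to-A column mapping like the one being discussed can be sketched roughly like this. The function names and bodies are my own illustration, not the episode's actual code; Excel columns are a bijective base-26 numbering, which is why the `n/26 - 1` step is needed instead of plain base-26 division:

```go
package main

import "fmt"

// indexToAlpha converts a zero-based column index to its Excel-style
// letter name: 0 -> "A", 25 -> "Z", 26 -> "AA".
func indexToAlpha(n int) string {
	if n < 26 {
		return string(rune('A' + n))
	}
	// Peel off the last letter, then recurse on the remaining
	// "digits", shifted by one because there is no zero letter.
	return indexToAlpha(n/26-1) + string(rune('A'+n%26))
}

// alphaToIndex is the inverse: "A" -> 0, "Z" -> 25, "AA" -> 26.
func alphaToIndex(s string) int {
	n := 0
	for i := 0; i < len(s); i++ {
		n = n*26 + int(s[i]-'A') + 1
	}
	return n - 1
}

func main() {
	fmt.Println(indexToAlpha(0), indexToAlpha(25), indexToAlpha(26), alphaToIndex("AB"))
	// → A Z AA 27
}
```

The off-by-one around the Z/AA boundary is exactly the kind of arithmetic that is easy to get wrong by vibes alone, which is why hand-checked unit tests matter here.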
Then, in preparation for the fuzzing, I created the inverse function, so that in the fuzz test I can convert an index to alpha and the alpha back to an index, and the two should match. In Go, you write a fuzzer like this: a function named Fuzz-something that takes a `*testing.F`, and inside it you create a seed corpus. The fuzzer then takes that seeded list of starting points and mutates them according to its heuristics.
So I had to read up on a tutorial on how to do that. This is not something I know well, and I didn't want ChatGPT making stuff up; I knew I'd probably waste time otherwise. Go fuzzing is relatively new, so it probably hasn't appeared much in its training corpus either. So I read up on it: this is how you set up the fuzzing corpus, and this is how you write the fuzz test itself.
So I had to experiment a little to get a prompt with good grip for a Go fuzzer, probably because the model hasn't seen this enough yet. What I came up with: I pasted the whole function, and after it I asked, "What is a good way to test this function, to fuzz this function?"
If I'd just left it at that and hadn't added the part at the bottom here, it would just give me generalities on fuzzing. So let's see what it says; isn't it going to be "oh yeah, fuzzing is this and this and this"? I call that corporate-drone hiring-interview bullshit: it's not wrong, but it's also not good.
And because it's probabilistic, sometimes it gives good results and sometimes bad ones. This one actually looks pretty good, I think because I gave it the body of the function. If we removed the body and gave it just an interface, it would probably be very generic.