[Corrected spelling of “noughts”; thanks, Ray, for pointing it out, and thanks to the Ridger for pointing out that I can change the title without changing the permalink.]
Here's another in my college days series. This one's about some recreational computer programming.
One day in 1975, I decided, on a lark, to write a program to play tic-tac-toe (UK: noughts and crosses). You know the game, surely; it's a simple game, and computers have been playing it since the earliest days of computers. There wasn't any particular reason for me to write one, except that I had some free time — and perhaps more time than sense — and felt like writing it in APL.
As I recall, the program didn't lend itself particularly well to APL, so it wasn't the most concise APL program on Earth. Still, it was fun to write. And just as you know it's a simple game, you probably also know that if neither player makes a mistake it will always end in a draw, which we used to call a “cat's game” (for reasons I never knew). Of course, the program never made a mistake — not hard to program for such a simple game — and so it couldn't possibly lose.
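The original APL is long gone, but the idea behind a program that can't lose is simple enough to sketch. Here's roughly what the move-selection logic amounts to in modern Python — a brute-force minimax search over the whole game tree, which is tiny for a 3×3 board. All the names and details here are my own, not the original program's; it uses the same 1-to-9 square numbering described below.

```python
# A rough sketch of an unbeatable tic-tac-toe player (minimax search).
# The original was in APL; this Python version is my own reconstruction
# of the idea, not the original code.

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Score the position for 'O' (the computer): +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return 1 if w == 'O' else -1
    moves = [i for i in range(9) if board[i] == ' ']
    if not moves:
        return 0  # board full, nobody won: a "cat's game"
    scores = []
    for i in moves:
        board[i] = player
        scores.append(minimax(board, 'X' if player == 'O' else 'O'))
        board[i] = ' '  # undo the trial move
    # The computer maximizes its score; the human is assumed to minimize it.
    return max(scores) if player == 'O' else min(scores)

def best_move(board):
    """Pick the computer's ('O') best square, numbered 1 to 9."""
    best, best_score = None, -2
    for i in range(9):
        if board[i] == ' ':
            board[i] = 'O'
            score = minimax(board, 'X')
            board[i] = ' '
            if score > best_score:
                best, best_score = i, score
    return best + 1  # squares are 1-based, as in the original interface
```

Because the search examines every continuation, the computer never "slips up": it wins when it can and otherwise steers every line of play toward a draw — which is exactly why my friend could never beat it.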
Write it. Test it by playing it a bit. Put it away forever.
Well, almost. First I showed it to a friend who was a journalism major (so, not a computer guy). He thought it was cool, despite the rudimentary user interface (well, to be fair to the programmer, pretty much all user interfaces were rudimentary in those days): it typed the grid out for you, you gave it your play by entering the number (1 to 9) of the square you wanted to play in, it made its move, and it typed the next iteration of the grid.
My friend liked it, and played it for quite a while. He kept playing it. He kept playing it long after I was sure he'd have tired of it, certainly long after I'd have tired of it. “OK,” I said, “Come on... let's go get some beer.” No, no, he replied... he wanted to play it more. He hadn't won a game yet. “But you can't win. You'll never win.” But surely it'll make a mistake eventually! “No... it won't. It's a computer. It's not going to get tired and ‘slip up’, or anything like that.” But don't computers make mistakes? “Well, there could be a bug in the program, yes. But I assure you, there isn't. It's a simple program, and I've tested it.” Aha! There could be a bug! And he kept playing, playing, looking for that bug. I left him in the computer center, and found others to “drink a pizza” with.
I'd like to say that he remains there, to this day, trying to defeat my program, like something out of a Twilight Zone episode. No, he did eventually show up at Leonardo's, and I think he even caught up with us with his beer count. He never did score a career in journalism, though, at least not as far as I know. And no one ever played my program again.
On a side note, we often measure the “bugginess” of a program as a ratio, in errors per thousand lines of code. The highest bug rate in history, at least by lore, was in a “trivial” IBM program called IEFBR14. I was going to describe it here, but I see that my colleague John Pershing's description is on Wikipedia, so I'll link to that instead (look at the “History” section).