
Quake II was my first experience writing C.

The code is clear, coherent, and straightforward. I’m not going to say it’s the best source code I’ve ever read, but it set a high bar. I’ve read source code to games since then, and I’ve seen all sorts of weird stuff… I’ve seen functions with nesting levels that go past the right side of the screen, I’ve seen functions a mile long that do a million things, you know. It was an early lesson that you could do cool things with simple code.

We used Quake and Quake II to teach a VR class to kids in the late 1990s and early 2000s. You got a VR headset, you got a half-dozen PCs with Q.E.D. or Qoole, and you taught a room of 12-14 or 14-18 year-old students how to make their own virtual reality game.

“Make your own virtual reality game” was, essentially, making your own Quake or Quake II level, without any monsters or guns in it. The story for the game could be told through on-screen messages triggered in the level. Students made all sorts of levels. One group made a level with slime you had to cross, and a button that turned on pumps to drain the slime (very much like what you would see in a standard Quake II level). Another group made a 3D maze out of water that you had to swim through, floating in space. I still have some of these student-made levels on my hard drive, after all these years, although I have trouble getting some of them to work.

For the class, I made a mod that gave the players a flaregun instead of the blaster. Basically, it was still the blaster, but with a different sound effect and no weapon model. I modified the blaster bolt to be affected by gravity, bounce off of things, and go out after a certain amount of time.

(If you were in that class—you may not have gotten the headset. We couldn’t always get it to work.)



Certainly id's code is clean and well-structured; however, I always had a tough time trying to grok it as a layperson. Comments (those within the functions) are relatively sparse, and the code often contains quite cryptic expressions that are somewhat bewildering to those without a deep understanding of 3D graphics.

This, of course, makes perfect sense as the output of an industry leader in the field (Carmack). However, it definitely requires a lot of foreknowledge and, dare I say it, a mathematical bent to follow with confidence.


"I’ve read source code to games since then, and I’ve seen all sorts of weird stuff… I’ve seen functions with nesting levels that go past the right side of the screen, I’ve seen functions a mile long that do a million things, you know. It was an early lesson that you could do cool things with simple code."

Guilty as charged!

I used to chronically write macaroni code. So you would have like 25 lines of code that ended up being the backbone of like a dozen things, which would have another dozen things stacked on top of each other. It would execute super fast BUT would accumulate technical debt quickly, and eventually said functions would become untouchable because you would risk breaking a lot of stuff built on top of other stuff.

Comments would try to clear it up, but my communication skills were not the best. They explained what a function did but not its workings. Things like: "Does SINE table", "Table Defrag", "Binary resolve", etc.

I'd get a lot of people going "What the F*K is this shit?! But... it does the job damn fast". If it had been any field other than video games, I would not have lasted long and neither would the product...


Appreciation for that id code means you may appreciate "John Carmack on Inlined Code (2014)": https://news.ycombinator.com/item?id=12120752


I never scrutinized the id software source, but man have I seen some horrifying game source. The bar is extremely low.

What's surprised me is that successful, fun to play, and stable enough games have been shipped with such absolute unmitigated disasters behind the curtain.

It makes me question sometimes all the effort (and time) I put into preventing the chaos, when such carelessly bodged-together garbage can be perfectly profitable.


It's nice when you can have both but I have yet to see a game review that mentions how clean the source code is!


That's usually because the high level gameplay code needs to be incrementally tweaked over months or years based on very subjective feedback like "this doesn't feel quite right, can you maybe try...".

You can start with a great plan, but no plan survives gameplay- and balance-tweaking for very long.

Ideally the lower layers are much more structured though, usually those need to be maintained over a longer time and across games.


You can see places in the Q2 code where it has been incrementally tweaked a bunch. The player movement code got a bunch of changes, and you can see blocks of it that are commented out or have #if 0. There are some comments explaining the thought process, but it’s not very clear.


That garbage code brought about speedrun culture, highlighted by charity events like Awesome Games Done Quick. The speedrunner and their crew on stage will explain the latest level skips, wall clips to avoid tedious areas, enemy weaknesses and exploits, and the frame-perfect input strings needed to accomplish them. And some will perform speedruns blindfolded, with only the audio cues to work with.


That hardly has to do with code quality, but instead that some edge-case "features" were not anticipated by the game designers, programmers and QA team, but "discovered" later by speedrunners.


Sometimes it is about code quality. Defensively written, well-documented and well-tested code would not be as likely to suffer from such bugs.

It's not just subtle floating-point errors either, but also outright wrong maths that appears to work (see [1] for an example), memory leaks [2], buffer overflows [3] etc.

[1] https://www.youtube.com/watch?v=4gNYTqn3qRc

[2] https://news.ycombinator.com/item?id=20410166

[3] https://arstechnica.com/gaming/2021/03/how-to-beat-paper-mar...


I think they made the right trade-off. Back in the 90s, on hardware that could barely run the game, adding extra defensive checks to make sure you had not accidentally clipped through the geometry, or simpler ones like making sure you were not running too fast, would have seemed excessive.

They weren't coding for some NASA space probe and they knew it.


Typically you'd put the checks only in debug builds.


The id Software source code gets cleaner with each game, I think. I've read the Doom source code and Quake II's. I remember Quake II being very clean and elegant. Doom is not quite there. It's fairly well organized and the functions are generally short and concise, but it's a mess of global variables.


Can you share any screenshots of the results of this course? It sounds amazing!


Whoa, I still remember watching a show about VR on the Discovery Channel around 1999 when I was a kid. They used Quake 2 as a demo, and it blew my mind. I was playing Quake 2 on my PC too, but the concept of VR was revolutionary to me at the time. It's stuck in my head to this day. And now I learn that using Quake 2 to teach VR was common practice. Maybe it was you in that documentary!


Talking about clear and coherent code, does anyone know if there is a deeper reason for using `break' and `continue' instead of pure structured programming? The snippet

  i = ent->client->chase_target - g_edicts;
  do {
     i++;
     if (i > maxclients->value)
        i = 1;
     e = g_edicts + i;
     if (!e->inuse)
        continue;
     if (!e->client->resp.spectator)
        break;
  } while (e != ent->client->chase_target);
for instance, from the function ChaseNext in the file original/rogue/g_chase.c, can be reduced to

  i = ent->client->chase_target - g_edicts;
  do {
     i = i % maxclients->value + 1;
     e = g_edicts + i;
  } while ((! e->inuse || e->client->resp.spectator) && (e != ent->client->chase_target));
which in my opinion is clearer since the exit condition for the loop is in one place.


I find the latter, with all the conditions, much less clear. I have to pause and reason about each condition and its relation to the others, the order of the comparisons, and the effect of short circuits.

The first example makes it very clear that condition A means do it again and condition B means exit this loop while condition C establishes the boundaries of the loop.


I think it's debatable which is clearer. Your while condition requires some mental parsing of the booleans, whereas the original can be analysed one condition at a time.

I'd say I prefer the original.


For me the original is much clearer and is how I normally structure my iterators/do/for loops, since explicitly checking for conditions and using 'continue' when the iterated value is not worth processing makes it nice and clear that everything _after_ these checks is valid and can contain the meat of the processing without more if/else checks.

This approach also works fine for c++ and c# code (probably also java too), which I like as it keeps my projects nice and consistent and I can jump between codebases and feel at home.

The only thing that I don't like about it is the if() statements not being wrapped in {}'s. That always makes me nervous.


One nice property with (pure) structured programming is that the statement

  do {
     ...
  } while (p);
guarantees that p is false after the loop and this makes it easier to reason about the program. With breaks, however, the loop could be "lying" so to speak.


Agreed. I think it's also a little easier to understand what's going on when stepping through the code in a debugger.


Using % vs comparison and reset was probably more efficient with the compilers of the time; might still be.

Also I doubt everyone would agree that the one exit condition is clearer. For example, in the original I know that if e->inuse is 0 it will continue with the next round. Arguably it's more difficult to understand that from the single combined expression, because it isn't so: if !e->inuse but e == ent->client->chase_target, the original will loop but yours will exit.

Though I'm guessing ent->client->chase_target->inuse is probably never 0.


> if !e->inuse but e == ent->client->chase_target, the original will loop but yours will exit.

No, the continue statement will jump to the test part also for `do' loops:

"The continue statement is related to break, but less often used; it causes the next iteration of the enclosing for, while, or do loop to begin. In the while and do, this means that the test part is executed immediately"

-- "The C Programming Language" (Second Edition)


Well, today I have learned something I may have forgotten, thank you :). It seems natural in for and while loops, so it makes sense that do loops use it too.


Seconded that the main reason was probably avoiding a mod. But it's not because of the compiler; it's the processor. Mod and division instructions are always more expensive than additions and multiplications. If this function was on the hot path (and looking at where it's called, it runs at least once per frame), that may have mattered a lot.


A modern optimizing compiler would quite likely do induction variable elimination/strength reduction and replace the % with increment-and-reset if it deemed it more efficient, so it’s also about the compiler.


Pretty sure your line 3 and the original lines 3-5 are not equivalent. Check it with `i` entering the loop with value `maxclients->value`.

The best reason not to rewrite for style reasons is that you might break it.


Thanks, now corrected.


If `i` can enter with value greater than `maxclients->value` (why not), it's still wrong.


Break, continue, return (and arguably even C's goto before variables could be declared in the middle of a scope block) are all entirely normal tools in the structured programming toolbox. How those are used is just personal taste.



