// ISO/IEC TR 18037 S5.3 (amending C99 6.7.3): "A function type shall not be
// qualified by an address-space qualifier."
if (Type->isFunctionType()) {
S.Diag(Attr.getLoc(), diag::err_attribute_address_function_type);
Attr.setInvalid();
return;
}
The comment justifies the code in a way that the code itself never could.
A unit test could document that equally well (although the pointer to the spec would still be useful). Whether or not it would fit your style of programming is another matter, of course.
Maybe I'm missing something. The code that you've written doesn't look any clearer to me than the code from the compiler itself, and without the comment, there's no "why" for the behavior.
Later, if a bug is filed saying that the compiler isn't compliant with the requirements of "ISO/IEC TR 18037 S5.3", how can you be sure to find the code where the behavior is implemented?
I would put that comment in the function header comment, not in the code itself. That's more meta.
And of course the function in question only deals with "ISO/IEC TR 18037 S5.3", so it's easily tested.
If someone files a bug saying the compiler isn't compliant with the requirements of "ISO/IEC TR 18037 S5.3", they would provide a test case demonstrating the non-compliance. Add that case to the existing unit tests and you will see which function fails. No need to search the code to see where the behaviour is implemented. With clean code it's obvious: the test will show this method to be at fault, even without any comments.
Code in functions should also try to stay at the same level of abstraction, moving low-level details into well-named methods that describe the intention (and letting the low-level method handle the how). That way your code reads like a story.
> No need to search the code to see where the behaviour is implemented. With clean code it's obvious: the test will show this method to be at fault, even without any comments.
So...it sounds like you'd have one broken implementation of the code, and one working implementation, that might just cancel out the effects of the broken one. You're assuming code that has been cleanly written for its whole history, or a lot of love poured into it to develop quality tests for each requirement. How commonly does that actually happen?
That wasn't the most even-keeled response, and it's past the edit window. What I mean to say is that I've never seen anything but a small codebase that couldn't use some in-code explication.
Clear code provides a clear "how". Good test coverage can act as documentation of the proper behavior and help prevent regressions. But I don't see how it follows that clean code makes the location of each implemented feature obvious. That doesn't seem inherently true.