Yes, there are occasional exceptions where you don't have to repeat or replicate the experiments reported in a paper to verify them. But that is very much the exception.

Generally you are expected to explain what you did in enough detail that the reader can replicate your experiment. If you're fitting a protein model to X-ray diffraction data, you aren't expected to include all the other protein models you considered that didn't fit, or explain to the reader your procedure for generating protein models, but you are expected to explain how you measured the fit to the X-ray diffraction data (with what algorithms or software, etc.) so that the reader can in theory do the same thing themself.
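
For instance, one standard fit metric in crystallography is the R-factor, the normalized disagreement between measured and model-predicted structure-factor amplitudes. A toy Python sketch (the array names are invented; real refinement also fits a scale factor between the two sets):

    import numpy as np

    def r_factor(f_obs, f_calc):
        """Crystallographic R-factor: sum of absolute differences between
        measured (f_obs) and predicted (f_calc) structure-factor amplitudes,
        normalized by the measured total. Lower is better."""
        f_obs = np.asarray(f_obs, dtype=float)
        f_calc = np.asarray(f_calc, dtype=float)
        return np.sum(np.abs(f_obs - f_calc)) / np.sum(f_obs)

    # invented example values, just to show the call
    print(r_factor([10.0, 20.0, 30.0], [11.0, 19.0, 33.0]))  # ~0.083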



Sure, but "I found the structure after 5 months playing around with it in Foldit" isn't that reproducible or informative either.

The result is still the same: a novel fold which is a significantly better fit than existing models, based on measured vs. predicted X-ray diffraction patterns and whatever other data you might have.

Which is publishable, yes?

When the Wikipedia entry at https://en.wikipedia.org/wiki/Foldit says "Foldit players reengineered the enzyme by adding 13 amino acids, increasing its activity by more than 18 times", how is that much different than "A magical wizard added 13 amino acids, increasing its activity by more than 18 times"?

Or "secret software".

What's publishable is that the result is novel (and hopefully interesting) and can be verified. The publication does not require that every step can be repeated.


I agree!

Unfortunately we have a long way to go to make it easy to repeat the calculation that a novel structure is "a significantly better fit than existing models, based on measured vs. predicted X-ray diffraction patterns". (If I run STEREOPOLE and it says the diffraction pattern from your new structure is a worse fit, is that because I'm running a different version of IDL? Maybe there's a bug in my FPU? Or in the version of BLAS my copy of IDL is linked with? Or you're using a copy of STEREOPOLE in which a previous grad student fixed a bug, while my copy still has the bug? And stochastic software like GAtor is potentially even worse.)
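
To make the floating-point worry concrete: summation isn't even associative in floating point, so two BLAS builds that accumulate in different orders can legitimately disagree in the low bits, and a fit comparison that was already close can flip. A toy demonstration in Python:

    import random

    random.seed(0)
    xs = [random.uniform(-1e8, 1e8) for _ in range(100000)]

    s1 = sum(xs)              # accumulate left to right
    s2 = sum(reversed(xs))    # same numbers, opposite order
    print(s1 == s2, s1 - s2)  # typically False, with a small nonzero gap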

This is something we could and should completely automate. There's been work on this by people like Konrad Hinsen, Yihui Xie, Jeremiah Orians, Eelco Dolstra, Ludovic Courtès, Shriram Krishnamurthi, Ricardo Wurmus, and Sam Tobin-Hochstadt, but there's a long way to go.
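None of that work is reproduced here, but even the crudest first step, recording exactly what ran against which inputs, is cheap. A stdlib-only Python sketch; the manifest format is invented for illustration:

    import hashlib, json, platform, sys
    from importlib import metadata

    def provenance_manifest(input_paths, package_names):
        """Record interpreter, OS, package versions, and input-file hashes,
        so a reader rerunning the calculation can at least detect that
        their environment or inputs differ from yours."""
        manifest = {
            "python": sys.version,
            "platform": platform.platform(),
            "packages": {name: metadata.version(name) for name in package_names},
            "inputs": {},
        }
        for path in input_paths:
            with open(path, "rb") as f:
                manifest["inputs"][path] = hashlib.sha256(f.read()).hexdigest()
        return json.dumps(manifest, indent=2, sort_keys=True)

    # e.g. provenance_manifest(["diffraction_data.mtz"], ["numpy", "scipy"])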


>Yes, there are occasional exceptions where you don't have to repeat or replicate the experiments reported in a paper to verify them. But that is very much the exception.

And even in this exceptional case, the algorithm itself is interesting above and beyond the fact of its existence.


It is, but if the algorithm produces a result such as a protein structure or a sorting network that is itself novel and verifiable, you can very reasonably publish that result separately, as long as replicating the result doesn't require knowing the search algorithm. Verifying that a sorting network sorts correctly doesn't.
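
That check is mechanical, too: by the zero-one principle, a comparator network on n wires sorts every input if and only if it sorts all 2^n sequences of 0s and 1s, so verifying a small network is one exhaustive loop. A Python sketch:

    from itertools import product

    def sorts_correctly(network, n):
        """Zero-one principle: a comparator network on n wires sorts all
        inputs iff it sorts every binary sequence of length n."""
        for bits in product((0, 1), repeat=n):
            wires = list(bits)
            for i, j in network:  # comparator puts min on wire i, max on wire j
                if wires[i] > wires[j]:
                    wires[i], wires[j] = wires[j], wires[i]
            if wires != sorted(bits):
                return False
        return True

    # the classic optimal 5-comparator network on 4 wires
    print(sorts_correctly([(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)], 4))  # True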



