xondono's comments

> Most of the harsher regulations only come into effect when the company hits a specific size.

That’s very market and country specific. Spain makes more than 1k tweaks to its food regulations each year, which would kill lots of restaurants if they actually had to be in full compliance.

The result is that everyone tries to make as much money as they can and builds an “inspection fund”, because you’re guaranteed to get a fine if inspected.


That's BS. You can have spies with or without unique IDs, and there are better ways to get votes than creating fake people.

Also, a lot of countries do have IDs...


I’m honestly very tired of this argument, everything about it is bad.

Features aren’t rights. If you want a phone that lets you run whatever you want, buy one or make it yourself.

What you’re trying to do is use the force of the state to mandate a feature that 99% of users won’t use, and that vastly increases the attack surface for most of them, especially the most vulnerable.

If anyone were trying to create a word that gives a “deviant” feel, they wouldn’t use “sideload”, and most people haven’t even heard the term. There’s a world of difference between words like “pirate”, “crack”, “hack” and “sideload”.

If anything I’d say it’s too nice a term, since it easily hides from normies the fact that what you’re doing is loading untrusted code, and it’s your responsibility to audit its origin and contents (something even lots of devs don’t do).

If you want to reverse engineer your devices, more power to you, but you don’t get to decide how other people’s devices work.


It's a proper argument on its surface, complete with claim, warrant, and impact.

"Features aren't rights" > see: Consumer Rights.

"Force of the state making sideloading mandatory is bad" > ...Except we have antitrust laws? The Play Store becomes the only source of apps, all transactions are routed through Google Billing? Not a problem for you?

"99% users won't use" > Except for when Google demands that transactions happen exclusively through Google Billing, which resulted in the release of the Epic Games Launcher for the world's highest grossing games by download.

"Sideloading is too nice" > Listen, either it's the case that "sideloading" is a threat to normies or it's not. Are normies your 1% or 99% of users? I thought according to you 99% of users won't sideload.

"You don't get to decide" > That language ties in pretty well with your fear of the use of the 'force of the state'; that tells me that you support freedom. Great-- you're right, why not let corporations be corporations and do anti-consumer things, they'll be very good to us (while they lobby the state).


> "Features aren't rights" > see: Consumer Rights.

Consumer rights aren’t features, and they’re very intentionally written to not be.

> "Force of the state making sideloading mandatory is bad" > ...Except we have antitrust laws?

Then sue them over those.

> Listen, either it's the case that "sideloading" is a threat to normies or it's not. Are normies your 1% or 99% of users? I thought according to you 99% of users won't sideload.

I meant that 99% of users aren’t scared off by the term “sideloading”. That you’re not using something doesn’t mean you’re afraid of it; it just means you don’t want it.

> you're right, why not let corporations be corporations and do anti-consumer things, they'll be very good to us (while they lobby the state).

Because corporations tend to die when they do anti-consumer things, but governments keep doing anti-citizen things without much trouble.


"Consumer rights aren’t features" > Any attempt to weasel out of a marketed feature set is generally and colloquially known as "false advertising"; consumers have a right to the features of a product they purchase under the original conditions of the purchase agreement.

"Then sue them" > My point was that the force of the state is a necessary evil to ensure fair competition. Yours implied that the force of the state is overreach, but if you warrant that, then you wouldn't enjoy protections against corporations afforded to us by antitrust law.

"That you're not using something..." > For you to claim that sideloading presents additional threat surface to the normie consumer, you need to also claim that normie users are sideloading. This means that if 99 percent of users are not sideloading, there is no threat surface.

"Because corporations tend to die when they do anti-consumer things, but governments keep doing anti-citizen things without much trouble." > Absolutely not. The paradigm has changed from the time when you could vote with your dollar. You and I are economically and legally irrelevant (where is Congress, anyway?), and corporations like the Big G are too big to fail. They are -already- colluding with government to do both anti-consumer and anti-citizen things.

Notably, this is why both the government AND Google do not want you to side-load software outside of their control.


> You don’t get to decide how others people’s devices work.

Perfectly reasonable. It's important that people can decide how their devices work for themselves. No one else should decide for them.

But I'm genuinely curious how you see this principle working in practice when there's effectively a duopoly. What's the path for someone who wants to still have any choices for their device? I'm not seeing an obvious answer, but maybe I'm missing something.


There isn’t a duopoly; it’s just that the two top contenders are way ahead of the rest, so wanting that niche feature requires big sacrifices.

Nowadays it’s not even that hard to build your own phone, but it’s not going to be a slick smartphone, for sure.


It's not possible to build your own phone in most markets anymore. Without iOS or Google Play Integrity you won't be able to install or run essential apps required for banking, taxes, healthcare, public transport, etc. This makes it impossible to compete, because anyone who buys your phone is required to also buy a secondary Google-approved Android or iPhone to lug around in order to function in society.


> Good luck building nuclear in non-generational timescales and at reasonable prices.

Or we could treat nuclear rationally and stop increasing the price three orders of magnitude past diminishing returns.


> Or we could treat nuclear rationally and stop increasing the price three orders of magnitude past diminishing returns

Who is we here? Do you have examples of any countries having successfully done what you are proposing?


'We' could refer to democratic societies that regulate nuclear energy with absurdly stringent standards compared to how we regulate other forms of energy. Just the regulatory cost of approving a new small reactor design exceeds $500 million! That's the lifetime earnings of thousands of engineers and bureaucrats.


$0.5B is a tiny rounding error in the cost of standing up the first GW of a new tech. If SMRs could be built for $10/W, which is overly optimistic, that would be $10B. Much more likely is $30B-$50B for that first GW. And SMRs are not even going to start getting to a halfway competitive cost until at least several GW in. If they can eventually get to $5/W they might have a chance at competing for a fraction of the grid.
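A quick back-of-the-envelope sketch of the comparison above. All figures are the comment's own ballparks ($0.5B approval cost, $10/W optimistic, up to $50/W pessimistic for the first GW), not real data:

```python
# Rough per-GW cost comparison for a first-of-a-kind SMR build,
# using the ballpark figures from the comment above.

def build_cost(dollars_per_watt: float, capacity_gw: float = 1.0) -> float:
    """Total build cost in dollars for a given $/W and capacity in GW."""
    return dollars_per_watt * capacity_gw * 1e9  # 1 GW = 1e9 W

regulatory_approval = 0.5e9      # ~$500M design-approval cost
optimistic = build_cost(10)      # $10/W -> $10B for the first GW
pessimistic = build_cost(50)     # $50/W -> $50B for the first GW

# Approval is ~5% of even the optimistic build cost,
# and only ~1% of the pessimistic one.
print(regulatory_approval / optimistic)    # 0.05
print(regulatory_approval / pessimistic)   # 0.01
```

On these assumptions the approval cost is a single-digit percentage of the first-GW build cost, which is the "rounding error" point being made.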

All this is to say that if there are high costs imposed by regulation, they're not in the regulatory process itself but in the cost of building the final design.

However, the "regulations make nuclear expensive" folks never seem to be able to propose the changes that might make nuclear cheaper, or by how much. The only concrete proposals I have heard are from people skeptical that nuclear can ever be cost competitive!


> Who is we here? Do you have examples of any countries having successfully done what you are proposing?

Does it really matter? There’s always a first country to do anything.

It makes no sense that actual exposure to radiation is increasing because of the lack of nuclear plants…



And still, even China is adding as much solar each year as its total nuclear capacity.


> Do you have examples of any countries having successfully done what you are proposing?

France pre 21st century, China, Korea, Poland.


South Korea had a massive corruption scandal. I guess it takes cheating to deliver?

https://www.technologyreview.com/2019/04/22/136020/how-greed...

China is barely building nuclear power. In terms of their grid mix it is backsliding.

Poland hasn’t built any, so no confirmed numbers yet?


Since when does Poland have a significant nuclear power generation program?

https://en.wikipedia.org/wiki/Nuclear_power_in_Poland


Does anyone have actual numbers on what France’s nuclear fleet cost? I thought it was somewhat shrouded in mystery due to government and national security subsidies.


> national security subsidies.

The bit they always say quietly is that you need nuclear reactors to provide the material for nuclear weapons.


This has been in Rust since before the 1.0 release.

Also, you can’t converge to a diverging target: for Rust to get close to C++’s level of complexity, the C++ WG would have to stop with their garbage.


> that’s some Apple level QA

Are you nuts?

X-ray inspection is not that rare; there are even small assembly houses here (Spain) that can do automated X-ray inspection.

This has been standard for years, to the point that I’ve been sent assembly house RFQ forms with checkboxes for X-ray inspection, and I haven’t handled a serious assembly development in ~4 years.

What’s new, and what they’re advertising here, is CT, which is another level.


Technically speaking, being published means at least the editors have reviewed. The quality of their reviews is another thing entirely.


Well... it can be desk rejected. Which I've actually had happen because the paper was already on ArXiv, even though it wasn't against the journal's policy. Took 4 weeks to resolve and then got desk rejected again for "not citing the correct works", with no further information... I don't submit to that journal anymore...


Sure it can, but then it’s not published.

My only point is that if you get published in a journal, at least two people have seen it.


That’s because we’ve basically reinterpreted what “peer review” is.

Peer review used to mean “some peers have reviewed it”, mainly the editors, who pushed for correctness and novelty. There was a clear difference between publishing and making a paper public. It never meant “it’s right”, but it meant “it has passed basic quality control and it’s worth your time to read it”.

Modern-day academia pushes people to fragment into ever smaller niches, meaning most editors are nowadays completely out of their depth when evaluating papers. So we keep referring to editor approval as “peer review” while trying to diminish the public perception that comes with it.


This is not true. In most of the top journals you need at least three other practitioners in your field to read it and sign off on it. The editor finds the appropriate reviewers, manages the process, does some basic format and other types of vetting, and also will accept or reject it based on the reviews from the reviewers.

The reviewers here are the "peers", and generally are expected to be qualified experts in the area that the paper deals with.


  > This is not true. In most of the top journals you need at least three other practitioners in your field to read it and sign off on it.
You're misreading xondono as well as me. I think your idea of what peer review is (in practice) is too idealized.

The problem is the word "expert". We're using it to mean different things, and the difference is important. Despite it appearing that way, "expert" is not a binary condition. It is a spectrum. Where along the spectrum requires context to determine the threshold. Ours (xondono, correct me if I misinterpreted), is higher than the one you're using.

Finding appropriate reviewers is a non-trivial task, which is kinda the entire problem. You can have a PhD in machine learning and that does not mean you're qualified to review another machine learning paper. I know, because I've told ACs I'm not qualified for certain works!

The problem is that what is being published is new knowledge. I'll refer to the (very very short) "illustrated guide to a Ph.D." How many people are qualified to determine if that knowledge is new? It's probably a lot fewer than you think. Let's go back to ML. Let's say your PhD and all your work is in Vision Transformers. Does that mean you're qualified to evaluate a paper on diffusion models? Truth is, probably not. Hell, there's been papers I've reviewed where I'm literally 1 of 2 people in the world who are the appropriate reviewers (the other is the main author of the paper we wrote that's being extended).

Hell, most people working on diffusion aren't even qualified to properly evaluate every diffusion paper! Here's a great example, where this work is more on the mathy side of diffusion models and you can look at the reviews[1]. Reviews are 6 (Weak Accept), 9 (Very Strong Accept), 8 (Strong Accept), 8, 6. Reviewer confidence was even low: 2, 4, 3, 3, 4, respectively (out of 5), and confidence is usually overstated.

Mind you, this is the #1 ML conference and these reviews are post rebuttal. There were over 13000 people reviewing that year[2] and they couldn't get people who had 5/5 confidence. This is even for a paper written by 2 top researchers at a top institution...

  > The reviewers here are the "peers", and generally are expected to be qualified experts in the area that the paper deals with.
So no. They are "expert" when compared to the general public, but not necessarily "expert" in context to the paper being reviewed.

I hope the physical evidence is enough to convince you, because honestly this is quite common and there's a viewing bias. Most of the time we don't have this data for works that were rejected. But there's plenty of works that were accepted that you can see this. Not to mention (as stated in my original comment), multiple extremely influential works (worthy of a Nobel Prize) have been rejected. Here's a pretty famous example, where it had both been rejected for being "too trivial" (twice) as well as "obviously incorrect."[3] Yet, it resulted in a Nobel and is one of the most cited works in the field. Doesn't sound like these reviews helped the paper become better, sounds more like it was just wasting time.

[0] https://matt.might.net/articles/phd-school-in-pictures/

[1] https://openreview.net/forum?id=NnMEadcdyD

[2] https://media.neurips.cc/Conferences/NeurIPS2024/NeurIPS2024...

[3] https://en.wikipedia.org/wiki/The_Market_for_Lemons#Critical...


I reviewed many papers when I was still in academia; I know how it works, thank you. Yes, I too have declined to review a paper or two because they reached out to the wrong person.

But no, I don't agree with your stringent definition of expert. If someone is in the general area and is aware of the problem you are trying to solve, that is good enough. E.g. someone who is in machine learning and aware of diffusion, has read papers on it, but has not done work on it themselves, is enough of an expert to review a diffusion paper.

These papers are supposed to be written to a general enough academic audience that someone like the above is able to understand and critique your work.

Also, if you are as experienced as you claim to be, you should know that conferences are notorious for having FAR weaker peer review than actual journals. That's why many works used to have both a conference version and a larger journal version. For conferences, due to the time limits on review, they often don't have enough qualified reviewers to cover all the papers. There are also no do-overs: if you get an unqualified reviewer you can't request another person.

There are even papers published about the poor quality of conference reviews!


  > Also, if you are as experienced as you claim to be, you should know that conferences are notorious for having FAR weaker peer review than actual journals.
Yes, but conferences are the primary publishing venue used in computer science and ML. Publishing in NeurIPS, CVPR, or ICML is more prestigious than publishing in JMLR or TMLR.

While worse at conferences, I agree, the fundamental problems are similar. Conferences exacerbate the problems, but they're still the same problems.

And please don't be offended. I'm writing to a general audience; I don't know if you have academic experience or not until you tell me. It seems you agree it would not be appropriate for me to assume otherwise.


> You're misreading xondono as well as me. I think your idea of what peer review is (in practice) is too idealized.

I think it is you who has an ideological axe to grind and is missing the forest for the trees (in this case the practical benefits for the drawbacks). Of course the process isn't perfect. Of course it's a spectrum. That's precisely how journals end up with reputations.

If you don't want to play the reputational game, fine, self publish on your website. Protocols such as ipfs and centralized archives such as arxiv make that easier than ever. But just because you choose to reject a process doesn't mean that it isn't of benefit to other people. And it should go without saying that just because something is of benefit to me (in this case as a reader) doesn't mean that it isn't also flawed in some way.


You haven't convinced me, you only made an appeal to authority. It's fine if you don't accept my evidence or reasoning but just appealing to authority or tradition is not an argument that the current system is better than an alternative one.


I made a few claims but I don't believe I made any appeals to authority. That would be of the form "peer review is good because X says so, therefore you are wrong". If you wish to challenge any of the claims I made I am open to it.


> That’s because we’ve basically reinterpreted what “peer review” is.

Who is "we" in this scenario? Because that's certainly not how I've seen peer review work.

The editor would ask a small group of people in the field to act as reviewers and then send them the papers. They review it and send it back with any requests for changes prior to publication.

So they're the peers that are reviewing, not the editor.


Look at the history of peer review. What you see post 1950 is pretty different from what you see prior to that. I think this quote is the best one-liner, though I think everyone should dig much more into the question:

  > in the early 20th century, "the burden of proof was generally on the opponents rather than the proponents of new ideas."
That is, the reviewers had a higher burden than the authors. The bias was towards acceptance rather than rejection. In a perfect world we could accept only good papers and reject only bad papers, but we don't live in that world. So the question is "when we fail, which way do we want to fail?" Obviously, I'm on the side of Blackstone here.

https://en.wikipedia.org/wiki/Scholarly_peer_review#History


But that's the opposite of what the person I'm replying to said. They're saying everything is acceptable and I'm saying it's actually reviewed.

Ok, maybe that's not what you meant. Peer review doesn't reject papers because they don't agree with the orthodoxy; they reject them because they're not competent. Is that what you were getting at?


I'm a bit confused. What you described is what happens today. Yes. That has been my experience too, serving as a reviewer. I understood xondono to be referencing the history that I mentioned, which is where these reviewers didn't exist. So the requirement was different, which is what I'm saying about the burden of proof.

  > Peer review doesn't reject papers because they don't agree with the orthodoxy; they reject them because they're not competent.
This is absolutely false and I don't know a single academic who hasn't seen competent papers get rejected.

Reviewers can reject for any reason. The system is built on trust, but incentivized to reject. "Not competent" is too vague of a term, just like "not novel."

In my other comment[0] I even reference one of the famous works that got rejected 3 times for being "not competent". This isn't a one-off case here, it is a common occurrence. On several occasions I've had to champion papers which were clearly competent yet my fellow reviewers simply were not familiar with the domain (they admitted this during discussion). I've also killed papers for similar reasons (a very rare event as I strongly bias towards accepting).

So I'm sorry, saying papers are only rejected because they are "not competent" is incredibly naive.

And I'm sorry, but the claim that "works aren't rejected because they don't agree with orthodoxy" is simply laughable. There's a long history of peers rejecting discoveries that upset the norms. This has happened to the majority of well-known scientists. I'm not talking about the Church going after Galileo, I'm talking about things like Galileo arguing with Tycho Brahe or Christoph Scheiner. Einstein was critical of Bohr. Hertz was critical of Bell. The list goes on and on. The criticism was explicitly about running counter to orthodoxy. This is such a common thread in history that Max Planck famously stated "Science advances one funeral at a time."

[0] https://news.ycombinator.com/item?id=44587535


But wouldn’t a consequence of failing to err on the side of caution and “orthodoxy” be a proliferation of junk and pseudoscience? We already have that problem now, but the change you are proposing seems to put a foot on the gas for those issues.

Sure, some valid papers get rejected, but how many bad papers are also rejected? How would it affect the wider scientific community if 1 additional good paper is “approved” and also 100 bad papers? Does good science still make it through eventually, or are these rejected papers losing valuable insights “forever”? What kind of damage can a bad paper cause and how often?

It just seems like it isn’t as clear cut as “the current system is flawed…and the alternative is objectively better” to me.


I think it is good to have that concern, I just don't see strong evidence that more open publishing results in more junk. I hear the arguments that there's already a lot and erring that way would only create more, but tbh, I think it would be mostly the same. Frankly, reviewing just doesn't scale very well. Even plagiarized material routinely gets published in high ranking venues. I just don't see how this gatekeeping provides strong protections against that, but I do see how it actually perpetuates it.

I know that last part sounds weird, so let me explain with an example[0]. This paper was rejected for plagiarism. What's unique about this case is it is public. ICLR lays it all out. You can click on the names and look at their g-scholar pages or better, DBLP, which allows you to better look at who they co-author with. Maoguo Gong here has almost 30k citations and Qiguang Miao just over 10k. The problem is that were they to publish elsewhere, we wouldn't have this record. So it makes it hard to track. Sure, we can open up rejections in all venues, but at that point we're honestly not too different from what I (and others) am advocating for.

What we're advocating for is something like people directly publishing to OpenReview, where we can have the comments on record, link to GitHub, datasets, or whatever. I'd even go further and allow different formats of publication, like blog style. PDFs are great for some domains but not for others. 8 page limits are great for some topics, but also not for others.

IME, there are very few (by percentage) bad actors in the space. That's much different than the percentage of bad papers, mind you. But there are larger problems that incentivize bad papers. Rushing for deadlines and the publish or perish paradigm are the two most obvious.

Importantly, I think we also need to ask who papers are being written for. IMO, it is incorrect to write them for broader audiences. That is the work of science communicators. Where that line is, I think, is very subjective. I'd rather the authors decide who their audience is.

I'm all for better communication, but I've only seen the current system make communication worse, not better. Again, this sounds counterintuitive, so I want to provide another example[1]. This is an egregious case where I think the problem is clear to a wider audience (I don't know your background), but frankly, I see stuff like this happen all the time: something simple is convoluted to make it appear more rigorous. One of the reasons I believe this happens is the need to get past the gatekeepers. The truth is that most reviewers do not spend much time with most papers. To really understand a paper you tend to need to spend hours with it (it's a lot of work!). Frankly, reviewers like to reject because it makes the job easier[2], and the system incentivizes this. The obfuscation doesn't exactly come with ill intent, but rather in small steps. It can even be something small, like adding an unnecessary math equation because a reviewer only glancing through will see math and think the paper is better. Unfortunately, we have to play these mind games while writing papers. Yes, better graphics can help communicate works, but we also don't want to stray so far as to make them a priority. Listen to most researchers on how they review papers (I tend to differ here): they place a lot of focus on graphics and tables. These are important, but they are also meaningless without context.

The problem is that the system is noisy. What people like me are proposing is that we accept and embrace noise rather than sweep it under the rug. The truth is that the noise cannot be avoided. In science, we specify error in our measurements for the same reason. Error is a measurement of uncertainty. By rejecting uncertainty, you only make your measurements less accurate, not more. That's the root of the issue here.

  > and the alternative is objectively better
You're right. And I'm happy to admit that my answer is not globally optimal[3,4]. But I think there is no globally optimal solution. Which is okay. There's many problems with no globally optimal solution. There's always some trade-off. I just believe that the bias should be akin to Blackstone's Ratio rather than the inverse. I believe biasing towards the inverse only makes false positives more detrimental, and we're not doing a great job at tackling the problem. Usually, that means we need to have a substantial rethinking. If the conventional solutions aren't solving the problems, maybe it is time to explore unconventional ones.

Most importantly, I think we need to have an open conversation. I'm certain there are better solutions than the one I've proposed (there's ones I even believe, but they take more to explain and build from here). There's so much we haven't even begun to discuss. It's a hard and complex problem, but isn't that what we researchers are trained for?

But the biggest problem right now is that the conversation tends to be shut down with an appeal to tradition. "Don't fix what isn't broken" is a fine policy, but it only goes so far. Unfortunately, that logic tends to be used as an excuse to ignore problems. If there are cracks in the system, surely you want to fix them before it breaks, right? I know I'm not alone in believing that the dam looks ready to burst. I'd like to try to avoid that fallout, if possible.

[0] https://openreview.net/forum?id=cIKQp84vqN

[1] https://youtu.be/Pl8BET_K1mc?t=2456

[2] Personally, I don't. I still spend hours with bad papers and will write lengthy reviews. I'm "on their team". I want them to make the best work that they can. The only "easy" reviews are really good papers and really bad papers, the former being exceptionally rare. But this is also why the review process becomes so subjective. It's very hard to define what constitutes very good and very bad. But I don't think I'm "doing my job" if I am just looking to be done with the job. I am "doing my job" by reading the work earnestly and providing the best feedback I can. I'm not going to waste my time, I'm not going to waste the authors' time, and I'm not going to waste the time of the next reviewer down the line that reads it when it gets resubmitted. I'll mention that I've been frequently recognized as an exceptional reviewer. Not to brag, but rather to give evidence to the implicit claim that I have expertise here.

[3] https://news.ycombinator.com/item?id=44587677

[4] https://news.ycombinator.com/item?id=44587822


I think they meant "reinterpreted" over the last century, not over the span of your personal experience and career.


They're saying it changed to not being reviewed properly and I'm saying from recent experience that it is.


I pretty much agree with you but wanted to nitpick this part

  > mainly the editors, who pushed for correctness and novelty.
I don't want to use the word correctness here[0], because no one checks if the work is correct. Rather, I'd say the goal is to check for wrongness. A peer reviewer cannot determine if a work is correct simply by reading it. The only way to do this is replication or extension (which is the case with the work here: the physical verification was an extension of the earlier work). It's important to make this distinction because, as you say, it doesn't mean the work is right. Nor does it even mean the readers think it is right.

In the past, many journals published as long as they did not think there were serious errors and were not plagiarized. Editing is completely different, where we want to make sure works are communicated correctly.

But I purposefully didn't say "novelty"

It is a trash word that means nothing. The original intent was that work wasn't redone: that you can't go in and take credit for discovering something someone else did, which we'd call plagiarism. You could change all the words and still plagiarize.

It is VERY easy to find problems/limitations with works. All works have limitations. All works are incomplete. But are these reasons to reject? Often, no... You see the same thing on HN and it's a classic bias of STEM people: hyperfixate on the issues. We're trained to, because that's the first step to solving problems! But that's not what matters in publishing, because we're not trying to solve all problems. We do it iteratively! It also runs counter to publishing quickly ("publish or perish"): what, you want to wait to publish until we have a grand theory of everything? And don't get me started on how bad we are at predicting the impact of works and how impact often runs counter to the status quo (you can't paradigm shift by maintaining the paradigm). So we don't explore...

AND very frequently, we DO NOT WANT novelty in science. Sounds strange, but it is *critical* to science existing.

- Our goal is to figure out how things work. The causal structure of things. So this means works need to be reproducible. We *want* reproductions, but we also don't want them ad infinitum.

- We *also* want to find other ways to derive the same thing. Some reviewers will consider this novel while others won't, typically inversely related to their expertise in the field (more expert = more likely to consider novel while less expert means you can't see the nuanced differences which are important).

This greatly stifles innovation and reduces how well papers communicate their ideas.

The problem here is as we advance, nuances matter more and more. Think of it as with approximations. Calculating the first order term is usually computationally easy, with computation exponentially increasing as the order of accuracy increases. The nuances start to dominate. But by focusing on "novelty" (rather than plagiarism) we face the exact problem you mention.

  > most editors are nowadays completely out of their depth when evaluating papers,
So authors end up just making their works look more convoluted, to look more impressive and less like the work they are building on top of. Niche experts can see right through this; grad students usually groan at first, but then just become accustomed to the shit and start doing the same thing. Because non-niche experts cannot differentiate the work that's being built upon from the new work.

It is a self-inflicted problem. As editors/reviewers we think we're doing right, but we're too dumb to see the minute (but important) differences. As authors we're just trying to get published, keep our jobs, and it's not exactly like the reviewers are "wrong". But it often just becomes a chase and does nothing to help make the papers actually better. This gets even worse with targeted acceptance rates, as it incentivizes reviewers to reject and be less nuanced. Which they're already incentivized to do because there's just so much stress and time crunch to the job anyways (including needing to rewrite papers because others did exactly this).

The targeted acceptance rates are just silly and we see the absurdity in domains like Machine Learning[1]. We have an exponentially increasing number of papers to review each year. This isn't just because there are new works, but because works are being resubmitted. Most of these conferences have 30% acceptance rates but the number of "wrong" papers is not that low. We also know the acceptance rate is very noisy for the majority of papers, where a different set of reviewers would result in a different outcome (see the multiple "NeurIPS experiment"s). You can do an easy model to see why this is bad. It just leads to more papers, and if the number of reviewers stays the same, this is more reviews that need to be done per reviewer, which just exacerbates the problem. If you have 1000 fixed papers submitted each year and even a low percentage of rejected works resubmitting the next year, like 10%, you actually have to review ~1075 papers. With a more realistic ~50% of rejected works getting recycled, you need to review ~1500 per year. Most serious authors will try a few times, and it is common to say "just keep trying".
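The ~1075 and ~1500 figures above fall out of a simple steady-state model. A minimal sketch, assuming the comment's numbers (1000 new papers/year, 30% acceptance, and a fixed fraction of rejections resubmitted the following year):

```python
# Steady-state yearly review load when rejected papers are resubmitted.
# At equilibrium:  total = new + resubmit * (1 - accept) * total
# which solves to: total = new / (1 - resubmit * (1 - accept))

def steady_state_load(new_papers: float, accept: float, resubmit: float) -> float:
    """Total papers needing review per year once resubmissions stabilize."""
    return new_papers / (1 - resubmit * (1 - accept))

print(round(steady_state_load(1000, 0.30, 0.10)))  # -> 1075
print(round(steady_state_load(1000, 0.30, 0.50)))  # -> 1538
```

So with half of rejections coming back, the review load is ~1.5x the number of genuinely new papers, matching the figures quoted above.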

We don't have to do this to ourselves... It helps no one, and actually harms everyone. So... why? What are we gaining?

It's just so fucking stupid

/rant (have we even started?)

[0] I'm pretty sure we're going to agree, but we're talking in public and want to make sure we communicate with the public. Tbh, even many scientists think "correctness" is the same as "is correct"

[1] It is extra bad because the primary publishing venue is conferences. You submit, get reviews (usually 3), get to do a rebuttal (often 1 page max), and then the final decision is made. There is no real discussion, so you have no real chance to explain things to near-niche experts. It's worse with acceptance deadlines and overlapping deadlines between conferences. It is better in other domains, since journals have conversations, but some of these problems still exist.


I’m hardly a “Captain”, and I’ve successfully negotiated salary increases several times.

The only thing you need is leverage, and yes, rank-and-file are not going to have a lot of leverage at a FAANG.

You can’t negotiate if you’re not willing to let the job go.


Sounds like a reinvented HAM radio club to me

