pwdisswordfish9's comments | Hacker News

Got some links? Would love to read.


https://github.com/llvm/llvm-project/issues/37930

Here’s Lattner telling some guy off. There aren’t any I can think of that are super memeworthy or anything. It’s usually just insecure devs with a little Dunning-Kruger cocktail in the mix.

Certainly not anybody bothering dozens of people manually, for months, like in the link. That’s pretty wild.


Why are you blathering about tea?



That depends on how selfish you are.


While I'm sure it's annoying, this post is not a good look and reads as rather unhinged. I'm also not sure what it's meant to accomplish other than venting.


Agreed. Remove all hyperbole and the crime here is... sending too many pull requests? I'm skeptical.


As if Apple would have their services obey firewall restrictions.



The QuickDraw source code has been available for some time:

<https://news.ycombinator.com/item?id=2285569>

Submitted several times to HN since then. Most recently:

<https://news.ycombinator.com/item?id=16519132>


> Parsing HTML isn’t trivial: aside from bad/invalid HTML (think: missing opening/closing tags, quotes, etc), there’s also a lot of content that requires javascript to render in the first place, for example, which means the page needs to be rendered and have access to window and DOM, etc.

Double standard. If you're going to make a fair comparison, then you need to compare like with like: the subset of things about e.g. HTML that give you what a screenshot can also give you. It makes no sense to hold the performance penalty of script execution against browser runtimes when (a) you don't have to execute any scripts to reach parity with a static image, and (b) static images can't do anything like what executable scripts enable.

And whether or not parsing HTML is trivial (which is debatable), its cost still isn't strictly greater than the computational resources needed for the kind of computer vision and widgetry that lets you e.g. select the text in a screenshot...


Speaking of Atom/RSS:

There's no reason to have a separate /archives resource and a /feed.xml (with or without content negotiation). You can just specify some external XSLT with an xml-stylesheet processing instruction in your feed XML that will cause the feed to be rendered nicely when it's opened in the browser...
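
As a rough sketch of what that looks like in an Atom feed (the /feed.xsl path is made up; any stylesheet URL works, and browsers that honor the processing instruction render the feed through it instead of showing raw XML):

    <?xml version="1.0" encoding="utf-8"?>
    <?xml-stylesheet type="text/xsl" href="/feed.xsl"?>
    <feed xmlns="http://www.w3.org/2005/Atom">
      <title>Example feed</title>
      <!-- entries as usual; the same resource serves feed readers
           and humans who open it directly in a browser -->
    </feed>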



You don't need an extension to do that. You can get by with a bookmarklet, which would be wise, considering how much contempt browser makers have for extension authors.


Wait...a bookmarklet can do stuff on multiple pages?

I thought that if I had a bookmarklet and invoked it on a page, and the bookmarklet navigated to a different page, the running instance of that bookmarklet would go away.


A bookmarklet is more or less just a JS expression that executes on command (the user's, that is). The execution model is approximately the same as for content-delivered scripts, and it's subject to the same constraints.

So, yes, if you just naively write a bookmarklet that navigates to a new page with e.g. assignment to window.location and then expect any result other than the next line of code not executing, then you're going to be disappointed. You solve this the same way you'd solve it if you were writing an ordinary Web app--implemented in JS delivered by the server with script elements on your own page.

Two stupid easy solutions that immediately come to mind: use XHR/fetch instead of actual page navigation; alternatively, have the bookmarklet open up a ~postage stamp-sized window with window.open that you can use both to output visible diagnostics and to keep the crawler resident (by doing all the work in the diagnostic window's context, which uses window.opener to control the initial tab as its puppet)... etc.
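
A minimal sketch combining the two ideas (not anyone's production code): the crawl loop stays resident in the original tab because it uses fetch + DOMParser instead of navigating, and a small window.open popup just displays progress. It assumes the crawled pages are same-origin (fetch from a bookmarklet obeys the page's normal CORS rules) and that they expose rel=next pagination links; both of those details are made up for illustration.

    (async () => {
      // postage-stamp diagnostics window; about:blank is same-origin with the opener
      const log = window.open('', '_blank', 'width=320,height=240');
      if (!log) return;                                  // popup blocked
      log.document.write('<pre id="out"></pre>');
      const out = log.document.getElementById('out');

      let url = location.href;
      for (let i = 0; i < 10 && url; i++) {              // crawl at most 10 pages
        const html = await fetch(url).then(r => r.text());
        const doc = new DOMParser().parseFromString(html, 'text/html');
        out.textContent += doc.title + '\n';             // "do stuff" with each page
        const next = doc.querySelector('a[rel=next]');   // hypothetical pagination link
        url = next ? new URL(next.getAttribute('href'), url).href : null;
      }
    })();

To make it an actual bookmarklet you'd strip the comments and collapse it into a single javascript: URL.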

