Hacker News | olegkikin's comments


There's a huge difference between what evolved organisms can adapt to, and the conditions necessary to start life.

Look, humans have already adapted to being in space for prolonged periods of time. We have adapted to every climate. But drop a naked human in a random spot on our planet, and they will die with a high degree of probability. And that's after billions of years of evolution.


Out of which some 3 billion years had to pass before unicellular forms of life evolved into multicellulars.

Who knows if this jump alone should be taken for granted - plus, not many planets could provide 3 billion years of "good weather" (or staying habitable) to start with.


> Out of which some 3 billion years had to pass before unicellular forms of life evolved into multicellulars.

We might be a slow developer and a poor representative sample in that respect, but without more data it's hard to tell.

> plus, not many planets could provide 3 billion years of "good weather" (or staying habitable) to start with.

I suspect the opposite will be true: red dwarfs are the most common type of star and are stable for trillions of years. Even if the average time it took multicellular life to evolve were 30 or 300 billion years, it might not matter for life evolving there. They do come with some other caveats though: https://en.wikipedia.org/wiki/Red_dwarf#Habitability


That is the odd thing about the history of life on Earth, isn't it? Prokaryotes popped up almost as soon as was conceivably possible. And then. Nothing. For billions of years. The plausible explanation is that you need oxygen for eukaryotes, and that's how long it took to change the chemistry of a planet. It just seems weird to me.


"Let me make my position clear. The miracle, and I do not mean it in the religion sense, I mean it in the evolutionary sense, the miracle of the evolution, is the cell. While there are theories involving an RNA world and selforganising, it remains a mystery. Once you had the eukaryotic cell from the point of view of evolution and development it was downhill all the way, very very easy. "

Lewis Wolpert : https://link.springer.com/chapter/10.1007/978-3-0348-8026-8_...


Just a reminder - we were naked for most of our history and were able to survive.


But not in nearly as many environments as we can today. We can assume that, over time, life learns to live under absurd conditions; but it's hardly safe to think it can start in those same conditions.


I think humans are a bad example - we use our own relatively highly developed intelligence to solve problems, and that means we had to feel safe enough from the environment long enough to develop that. But if we look at older organisms, like the tardigrade, there are some incredibly robust creatures out there.


It's not the noise that prevents us from seeing distant planets, but the diffraction limits.

https://en.wikipedia.org/wiki/Diffraction-limited_system
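To get a feel for the scale of the problem, here is a rough back-of-the-envelope calculation using the Rayleigh criterion. The aperture, wavelength, and distance are made-up example numbers, not from any real instrument:

```python
# Rayleigh criterion: smallest resolvable angle ~ 1.22 * wavelength / aperture.
# Example assumptions: 550 nm green light, a 10 m telescope,
# and an Earth-sized planet 10 light-years away.
wavelength = 550e-9                                 # m
aperture = 10.0                                     # m
diffraction_limit = 1.22 * wavelength / aperture    # ~6.7e-8 rad

earth_diameter = 1.274e7                            # m
distance = 10 * 9.46e15                             # m (10 light-years)
planet_angle = earth_diameter / distance            # ~1.3e-10 rad

# The planet's disk subtends roughly 500x less than the diffraction limit,
# so no amount of noise reduction lets this telescope resolve it as a disk.
print(diffraction_limit / planet_angle)
```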



That's a good thing. Low-end GPUs suck anyway, yet add so much weight and power consumption.


Once someone goes up to a $70 GPU, the argument starts to fall apart. Sure, the sub-$50 cards are only worth it for someone who just wants another monitor (but buy a DisplayPort card and monitors). The value-to-performance of these $50 cards is low.

Looking at the GTX 1030, this $70 card's performance-to-value improves greatly. It has the same performance as an AMD 480. https://www.newegg.com/Product/Product.aspx?Item=N82E1681413...


> It has the same performance as a AMD 480.

Yeah I don’t think that’s true: http://gpu.userbenchmark.com/Compare/Nvidia-GT-1030-vs-AMD-R...


Did you mean something else? After a quick look, a 480 has around twice the performance. Nearly bought a 1030 ;)


460!!!! I made a mistake :(


> GTX 1030 this $70 card's performance to value improves greatly. It has the same performance as a AMD 480.

Are you confusing it with the AMD RX 460? The 460 is similar to a 1030.

An actual AMD RX 480/580 is far, far ahead of that. It's closer in performance to a GTX 1060.


So is this a good PRNG?

    f(i) = SHA256(i + salt)


Discussion: https://crypto.stackexchange.com/questions/9076/using-a-hash...

Also, http://xoroshiro.di.unimi.it makes reference to how, on x86 CPUs with AES instructions, one could get a very fast PRNG similar to what you're implying.

Since a change of 1 bit in the input ideally flips half the bits of the output for a cryptographic hash, this scheme should work.

See also https://en.wikipedia.org/wiki/Fortuna_(PRNG)
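As a sketch of that counter-mode idea in Python (the function and variable names are mine, purely illustrative):

```python
import hashlib

def hash_prng(salt: bytes):
    """Generator yielding pseudorandom bytes as SHA256(counter || salt)."""
    i = 0
    while True:
        block = hashlib.sha256(i.to_bytes(8, "big") + salt).digest()
        yield from block  # 32 pseudorandom bytes per hash invocation
        i += 1

# Same salt -> same stream, which is why the salt must come from a real
# entropy source (e.g. the OS), exactly as discussed below.
gen = hash_prng(b"seed-from-os-entropy")
sample = bytes(next(gen) for _ in range(16))
print(sample.hex())
```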


I don't know much about this stuff, but I think that this would be considered overkill (and thus too slow) if you don't need crypto-level randomness. But other than that it should be really good.


Random, sure. Uniformly distributed? Not sure.


My understanding is that to the extent it's not uniformly distributed, that's viewed as a cryptographic vulnerability.


A PRNG usually doesn’t have parameters once seeded. What’s i in your scheme?

If you’re asking if you can make a good PRNG out of cryptographic primitives, yes, that’s what happens in the bowels of OS CSPRNGs. But a bit more involved.


No. A feature of a good PRNG is that it should not be predictable. If the salt is weak enough to be cracked, that PRNG can be reversed and predicted.


The salt is the seed, presumably initialized with /dev/random or whichever system entropy source is available. Any PRNG is weak if the seed is weak.


But corporations as paperclip maximizers is a bad analogy. Corporations generally produce something large numbers of people want (be it solar panels or cars in his example), whereas a paperclip maximizer AI produces paperclips that only it wants, destroying everything in its path.


The "paperclip" in Stross' talk is not the item being produced, but earnings. If a corporations could massively increase its quarterly earnings by starting a nuclear war, it may well decide to do so.

It is not that the item or service is useful or not in the short term, but that it will pursue a set goal with a single minded conviction.

Another analogy may well be the Sorcerer's Apprentice.


The risk of a paperclip maximiser isn't that the paperclip maximiser wants paperclips, but that the limits of almost any reasonable goal become strongly perverse when one approaches the limits of feasible computation.

Corporations are different because there are very strong diminishing returns on scale when it comes to how smart they can be.


Being a programmer, but not an engineer, I always wanted to try to create a camera with the ability to micro-shift the sensor in both directions, increasing the resolution. If you shift 3 times by 1/3 of the pixel on each axis, you can theoretically 9X the resolution (well, megapixels). Of course, it won't work for dynamic scenes, but could be quite useful for industrial scanning or looking for house insulation problems.
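A naive sketch of the interleaving step, assuming nine captures each offset by a third of a pixel (my own illustrative Python/NumPy, not real camera firmware):

```python
import numpy as np

def interleave_9x(captures):
    """captures[dy][dx]: HxW frame taken with the sensor shifted by
    (dy/3, dx/3) of a pixel. Interleaves them into a 3H x 3W grid."""
    h, w = captures[0][0].shape
    out = np.empty((3 * h, 3 * w), dtype=captures[0][0].dtype)
    for dy in range(3):
        for dx in range(3):
            out[dy::3, dx::3] = captures[dy][dx]
    return out
```

Note each output pixel still integrates light over a full photosite, so this alone gives 9x the pixel count, not necessarily 9x the detail; recovering real detail would need deconvolution on top.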


I'm not sure this would work. The pixels of the sensor are capturing a whole square of incoming light, not just at a single point. Moving the camera 1/3 of a pixel over would just cause 1/3 of the light to bleed over to the next pixel, but it wouldn't increase the achieved resolution.

Oh, wait, maybe this could work. There might be a neat way to take the old and new values and calculate what the in-between area's color must have been, based on how much the bigger blocks changed with a small movement? Either way, you'd need to be pretty far from the subject, but it might work for a landscape?


It works in commercial cameras like Hasselblad H4D-200MS.


The pixel-shift of the Sony A7RIII and Hasselblad only works to remove the effect of different pixels having different color filters in a Bayer array, with the result being an image with 4x resolution and each pixel containing full color information.

To perform superresolution imaging, you need some statistical information, some sort of prior, on the scene being imaged, and the processing requirements are not insignificant. It can potentially be done, but not in the simplistic pixel-shift sense that recent cameras advertise.


As I understand it, a lot of the pixel-shifting is to avoid de-bayering effects, not to provide superresolution. I'm more familiar with the A7RIII, so perhaps the Hasselblad does superresolution?

Unwinding sub-pixel structure requires, in general, careful deconvolution algorithms.


Do they use a COTS sensor with clever DSP or do they build a sensor that's actually capable of sampling?


It's called microscanning [1]. It is temporally sensitive, so with a moving scene it can be difficult to stitch a good image together, and it lowers your framerate.

[1] https://en.m.wikipedia.org/wiki/Microscanning


Flir app on Android has this feature built-in.

https://play.google.com/store/apps/details?id=georg.com.ther...


That sounds a lot like a mechanical emulation of a light field camera.


Instead of microshifting the sensor, why not offset the input light on the sensor with mirrors?


Nope, won't work; the image is not an orthographic projection.


This absolutely works, but not quite the way GP described. Instead, unit motions on the Bayer pattern are used, to the effect that every logical subpixel is sampled by every colour channel (therefore giving you a full pixel). Hence, no demosaicing is required. Hence, higher spatial frequencies can be maintained without incurring aliasing.
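A rough sketch of that combination step for an RGGB array (my own illustrative Python/NumPy, not any camera's actual pipeline):

```python
import numpy as np

# Four frames, each shifted by one photosite, so every scene location is
# sampled through all four sites of the RGGB Bayer tile.
BAYER = [["R", "G"], ["G", "B"]]          # RGGB tile (assumption)
SHIFTS = [(0, 0), (0, 1), (1, 0), (1, 1)]
CHANNEL = {"R": 0, "G": 1, "B": 2}

def combine_bayer_shifts(frames):
    """frames[k]: HxW raw capture at shift SHIFTS[k]. Returns HxWx3 RGB."""
    h, w = frames[0].shape
    rgb = np.zeros((h, w, 3))
    for (dy, dx), frame in zip(SHIFTS, frames):
        for py in range(2):
            for px in range(2):
                # Pixels of parity (py, px) under shift (dy, dx) saw this filter:
                c = CHANNEL[BAYER[(py + dy) % 2][(px + dx) % 2]]
                rgb[py::2, px::2, c] += frame[py::2, px::2]
    rgb[..., 1] /= 2                      # green is sampled twice per location
    return rgb
```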


It is also possible to actually increase resolution. https://arxiv.org/abs/1706.06266


Got an example of an image made this way? I’d wager it probably looks like crap.


Can you elaborate?


Camera lenses curve light, offsetting on a micro scale produces a slightly different perspective. You would end up with a blurrier image, not a high resolution one.


What do people buy $1000 smartphones for? For gaming I can get a pretty good gaming PC for that amount, or Xbox One X with a VR headset.

I've never seen anybody do anything really cool with their phone - everyone browses the internet, checks the email, does chats/videochats, maps, take photos, and some silly mobile gaming. I do that all on my sub $200 phone.


JS-heavy sites are night-and-day faster on iOS, thanks primarily to faster CPUs (which is mostly due to better L2/L3 caches). The iPhone X has 8MB of L2 cache; your sub $200 phone likely has no L2 at all. Even high end Android phones have, at most, 1MB of L2 cache. This shows up pretty clearly in Speedometer scores.

https://browser.geekbench.com/ios-benchmarks https://browser.geekbench.com/android-benchmarks


Can you post a few links to websites that are painfully slow on low-end machines? I would like to test with my son's $64 Android vs my iPhone X.


Try browsing the web on your son's phone at all. You'll find that just about every website is slower to begin with.


You are not wrong, but that's like arguing that a Porsche is the only way to drive.


Well, it is.

If you can afford one.

And if you can, can you give me a ride?


I could afford a more expensive car, but I'm very happy with my Honda... and my Android phone.


As devices get faster, developers use more compute budget, leaving other devices in the dust. Developers write code against high-end devices.


Not if their site needs a large audience, e.g. ecommerce.


You would think so. But man, even sites like newegg and amazon would drag terribly on my old phone. And many news sites were sooo painful with all their ads.


My semi-budget Android phone has 1.5 MB of L2 cache.


Which websites are you talking about? I've yet to encounter any important ones that are really slow.


> What do people buy $1000 smartphones for?

Consider the iPhone X a smartcamera instead of a smartphone.

Consider the "portraits in low light" at the end of the side-by-side comparison section that Vanessa Hand Orellana and Lexy Savvides did for CNET here:

https://www.cnet.com/news/iphone-x-vs-iphone-8-plus-is-the-c...


If I want a low-light camera, I will buy a DSLR/mirrorless, and no smartphone comes even close to something like Sony A7s (~$1100 used). I used to do commercial portrait photography, so I know.

I also dislike the fake background blur / bokeh trend. I understand why the companies are doing it, but it's insanely hard to implement it right, and no phone has achieved it. Every time there's hair involved, they all fail.


My DSLR is terrible at browsing the web and getting my emails. Don't get me started on trying to play angry birds on it.


Maybe for a specific event, but carrying around a DSLR everywhere you go is no fun. For candid shots, or traveling without carrying around a backpack everywhere, a phone that can take amazing pictures is awesome.


I bought it because $1000 is not a large expense for me and I use my phone all day long so I prefer to have the nicest experience possible (for me that means the best iOS device, for someone else it might mean the best Android device on the market).

For middle class people in the first world (and upper middle class people in China), spending $1000 every 1-3 years is not enough to make a meaningful dent on finances.


Because it's a nice device? Fast, sleek, good camera, stylish etc

Why do people buy nice cars instead of the cheapest, most economical models?


To be honest, more expensive cars are miles ahead of cheap ones in many ways: usability, driver-assist systems, safety, etc. After owning a recent Mercedes C-Class, I started to fear economical cars much, much more than the crazy BMW next to me, since the latter is more likely to be able to stop in an emergency than a cheaper car.


I understand it's "fast", but what do you use that speed for to justify spending so much more money? I understand if you're so wealthy, you're willing to pay $800 more to open Facebook a second faster than me, but I don't get why an average person would do something like that.


Why does anybody buy an $800 graphics card when it will be obsolete in under two years?

Because they plan on dealing with it being outpaced by newer hardware but not by enough to care about upgrading for a generation or two in most cases.

This is why I’m still rocking my iPhone 7+, I don’t know if I’ll upgrade to the 9 when it comes out at this point because the phone still runs great - just like I plan on skipping Volta since I just bought a 1080 Ti and there’s not a game in the world that taxes it enough yet at 1440P.


An $800 graphics card has an immediate use case: high FPS for games at high resolutions and high settings. Which is very important in competitive gaming, for instance.

I don't see such a use case for a $1000 smartphone. What important task does it solve that a $300-400 phone doesn't? A slightly better camera, slightly smoother scrolling, slightly faster web page rendering. None of that impresses me in the slightest, these improvements are marginal.


Camera is hugely important to me, as is web browsing performance - the two tasks I use my phone most heavily for. In fact, these are the only two reasons I jumped from my iPhone 6+ to the 7+. You can find a decent phone for $400 these days, but there's always tradeoffs and I'm not willing to compromise (and to be honest, I have an investment in the Apple ecosystem, so unless I plan on figuring out an entirely new workflow there's not much choice for $400 phones right now).


That's what I don't get.

Which websites are radically slower on iphone 6+ compared to 7+?

And the camera - what use case are you talking about? Taking family photos to post on Facebook? If so, how is that "hugely important"? Taking professional photos?


And yet I'm still rocking my $350 Oneplus after two years.

Future proofing and spending exorbitant amounts of money do not necessarily have to go hand in hand.

It's OK to admit you spent the Apple premium because you like Apple phones.


Apple will eventually ruin the performance of the 7+ like they did with the iPad 2 and the iPhone 6. I would advise you to wait a few months for reports to come in regarding what implications the next iOS update has for your UX.

Keep in mind this is intentional destruction of your property.


Yea, there is no evidence they have ever done anything like that. Throttling CPUs so an old phone won't reboot during use is actually improving performance, and you can get rid of throttling with a new battery. Apple should be criticized for poorly communicating the choice users had to make, not for the throttling itself.

And an iPhone 6 only has 1 GB of RAM, and its multicore performance is less than half of an iPhone 7's, and one quarter of an iPhone X's. And an iPad 2 only has 512 MB of RAM, and its multicore performance is one quarter that of an iPhone 6, and around 1/18th of an iPad Pro's.

It's basically impossible to add useful new OS features without impacting a three-year-old device's performance, so the only choice is whether to withhold those features from older phones or accept the tradeoffs. And you control whether you accept those features by accepting the upgrade.


I don't care about new bells and whistles. How about this: your update makes my UX poor, so let me go back to the previous OS version, the one that phone was sold with.


I used to like sub-$200 phones as well, but I prefer something more expensive for the better build quality. However, not $1000 more expensive.


A phone is used for hours every day. If you buy a $200 phone, it costs around 20 cents per hour of use. If you buy an iPhone X, it's going to cost around 50 cents per hour of use.

The $200 phone is worth zero in 2 years; the iPhone will retain about half its value.


GearVR is really good for watching movies on flights.


I'm somewhat sure we will turn trans-human in the next 50 years, and then all bets are off. There may not be humans, not because we die off, but because we turn ourselves into something better, totally different.

1000 years is a long long time to make any predictions. Predicting 10 years into the future is hard enough.

