Hacker News | potato-peeler's comments

> When looking at glass in real life, your left eye and your right eye see slightly different refraction patterns since they're looking at the surface from slightly different angles

But if you close one eye, you can still make out the depth. The brain is still able to tell what is glass, and what is on top of the glass or below it.


Only across time via parallax motion and depth refocusing, neither of which are available on the screen. And both of those signals are extremely secondary to binocular sight. There's a reason that people with strabismus lose depth perception. Their point stands.

(Though Apple could technically do a parallax effect by face tracking if they wanted)


> There's a reason that people with strabismus lose depth perception.

Still, we don't stumble into things, nor do we fail to recognise what is on glass vs. inside it. Even if we do not have binocular depth perception, we actually perceive depth irl just fine.

And people with binocular vision also fall for depth illusions just fine, too. The brain does a lot of predictive processing. It would be too inefficient to be constantly relying on such details for basic tasks.


> Even if we do not have binocular depth perception, we actually perceive depth irl just fine.

I don't think it's every one of us: I struggle somewhat with pouring things into small openings (e.g. refilling a small bottle from a bigger one), and most ball games (tennis, table tennis) are difficult.

I don't think this makes depth perception a problem, but I think it's inarguable that mine isn't as good as that of the people I know with binocular vision.


I don't know about your experience or situation, but a confound is that people with strabismus usually have bad eyesight in other respects too; developing strabismus is usually the result of other issues with eyesight. The obvious confound is basically having only one good eye to use at a time, and thus fewer neural pathways developed and utilised than in those who use two eyes. This could make visual perception tasks like tracking a fast-moving ball harder regardless of the actual role of depth perception in it. There could be tasks where reliance on perceptual cues for depth perception is less effective, but I wouldn't think a moving ball is that kind of task.


You might well be right. My eyes are definitely not great on top of the strabismus and lack of binocular vision.

One of the main issues with tracking things is focus switching from one eye to another based on where it's moving.

That said, I do think the issue with pouring things is more of a depth perception issue. I basically have to switch focus from one eye to the other to be satisfied I'm aligned where I want to be.


Pouring things sounds more like something that could be a depth perception issue, true, though I never actually noticed that for myself. I believe I find it harder than usual to pass a thread through a needle, though, because of depth perception issues.


It's good that you don't have trouble getting through life, but "just fine" is not a measurement. Lacking binocular convergence inarguably diminishes perception, even if it is not completely gone.


Measurements actually support that [0]. I am pretty sure you could devise some scenarios where individuals with strabismus do not perform as well, but for most irl scenarios there is no difference. Compensatory mechanisms do the job just fine, and even those with normal eyesight do not rely solely on binocular convergence either. Our brains don't usually rely on a single signal to make sense of the world, and predictive processing plays a huge role in constructing the image of the world around us, which is also why depth illusions work. Even for those with normal binocular convergence, its contribution to making sense of depth is probably smaller than that of other perceptual cues.

[0] Zlatkute et al. (2020). Unimpaired perception of relative depth from perspective cues in strabismus. R. Soc. Open Sci. 7: 200955. https://doi.org/10.1098/rsos.200955


That article does not seem to support your point. They're not measuring depth perception, they're measuring whether people with strabismus have managed to learn perspective cues in 2D images, and, in fact, the article explicitly states agreement with the point you're arguing against.

> Strabismus disrupts sensory fusion, the cortical process of combining the images from the two eyes into a single binocular image [3–6]. The main perceptual consequences of lack of fused binocular images is diplopia (double vision) and a lack of binocular depth perception.

Just because those with strabismus can use monocular cues to inform them of relative depth does not mean that they have the same level of depth perception as those with normal binocular convergence.

The best example of this is sports, but as another example I'm legally disallowed from driving an articulated vehicle -- for what I personally think is a pretty good reason. Anecdotally, compared to friends and family my depth perception is dogshit.


You quote:

> Strabismus disrupts sensory fusion, the cortical process of combining the images from the two eyes into a single binocular image [3–6]. The main perceptual consequences of lack of fused binocular images is diplopia (double vision) and a lack of binocular depth perception.

I am speaking specifically about whether people with strabismus have issues with depth perception or not. Obviously "strabismus disrupts sensory fusion" as you do not combine the input of the 2 eyes, and obviously this is a problem outside of depth perception. Moreover, most people with strabismus have bad eyesight more generally, as a common path to develop strabismus is having one eye much worse than the other. I am not saying strabismus is not an issue, I am saying that people with strabismus can still develop normal levels of depth perception in most irl situations by compensating with perceptual cues.

The article specifically tests whether people with strabismus had problems developing depth perception. If binocular depth perception were necessary for developing depth perception, they would have found that people with strabismus have impaired depth perception with 2D images. They didn't.

Again, as I wrote to the other commenter, I do not know about your situation, but I am curious how you compare depth perception specifically with your friends and family. Having problems with visual perception does not mean that "lack of depth perception" is the issue. Using only one eye at a time is a huge issue by itself that makes vision harder, and a huge confound to control for in such comparisons.


They do a parallax effect for some things, but not for all Liquid Glass widgets (it would be interesting, but probably too much).



Can it be fine tuned for a specific task?


Yes, you can fine-tune a model for any task. What do you have in mind?


> Americans should be happy that the US government is the biggest player. Would you prefer to have China or Russia or the Middle East be the biggest player?

People talk about the US as if it's some kind of la-la land! Every country and every person should take active measures to protect themselves from US influence.


At least the axe is one of us!


Does not work in older browsers (iOS 13.6).

Strangely, the latest Bootstrap works just fine.


Slightly tangential question - what gui framework did you use to build the apps?


This may not be directly related to llm but I am curious about two things -

1. How does an LLM/RAG setup generate an answer given a list of documents and a question? I can use BM25 to get a list of documents, but after that, what is the logic/algorithm that generates an answer from that list?

2. For small models like this, how much data do you need to fine-tune for a specific use case? For e.g., if I need this model to be knowledgeable about HTML/CSS, then I have access to a lot of documentation online that I can feed it. But if it is a very specific topic, like types of banana, then it may be only a couple of Wikipedia pages. So is fine-tuning directly dependent on the quantity of data alone?


Short answer: in RAG systems the documents are chunked into some predefined size (you can pick a size based on your use case), and the text is converted into vector embeddings (e.g. using the OpenAI embeddings API) and stored in a vector database like Chroma, Pinecone, or pgvector in Postgres.

Then your query is converted into embeddings and the top N chunks are returned via similarity search (cosine, dot product, or some other method) - this has advantages over BM25, which is lexical.

then you can do some processing or just hand over all the chunks as context saying "here are some documents use them to answer this question" + your query to the llm
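
The whole retrieve-then-prompt loop can be sketched in a few lines. This is a toy sketch, not any particular library's API: the bag-of-words `embed` function stands in for a real embedding model (e.g. the OpenAI embeddings API), and the document strings are made up.

```python
import numpy as np

# Toy embedding: normalized bag-of-words over a fixed vocabulary.
# A real system would call an embedding model here instead.
def embed(text, vocab):
    vec = np.zeros(len(vocab))
    for word in text.lower().split():
        if word in vocab:
            vec[vocab[word]] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

documents = [
    "The mitochondria is the powerhouse of the cell.",
    "Python lists are dynamic arrays under the hood.",
    "Debian releases are named after Toy Story characters.",
]
vocab = {w: i for i, w in enumerate(
    sorted({w for d in documents for w in d.lower().split()}))}

query = "what are python lists?"
q = embed(query, vocab)

# Cosine similarity against every chunk, then keep the top N.
scores = [float(q @ embed(d, vocab)) for d in documents]
top = sorted(range(len(documents)), key=lambda i: -scores[i])[:2]

# Assemble the prompt: retrieved chunks + the user's question.
prompt = "Here are some documents; use them to answer the question.\n\n"
for i in top:
    prompt += f"[doc {i}] {documents[i]}\n"
prompt += f"\nQuestion: {query}"
print(prompt)
```

The string in `prompt` is literally all the model sees; the "framing" of the answer is just next-token prediction conditioned on that context.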


> then you can do some processing or just hand over all the chunks as context saying "here are some documents use them to answer this question" + your query to the llm

This part is what I want to understand. How does the llm “frame” an answer?


I guess you could just try an equivalent in ChatGPT or Gemini or something. Paste 5 text files one after the other in some structured schema that includes metadata, and ask a question. You can steer it with additional instructions, like mentioning the filename, etc.


> The overall disk usage for trixie is 403,854,660 kB (403 GB)

What does this mean? If all 69k+ packages are installed, it will take up this much space?


As this also lists lines of code, it sounds more like sources plus packages. Think of the space that a full mirror (src + generic + arch-specific packages) would need.


Indeed, this is the amount of space that a Debian mirror would need to host all Trixie packages. So it's the compressed packages total size, not the space it would take to have all packages installed simultaneously (which also happens to be impossible, because of package conflicts/alternatives and Debian supporting 7+ different architectures).
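
For a rough sense of where such a number comes from: each entry in a mirror's Packages index carries a `Size:` field with the compressed .deb size in bytes, so the total is just the sum of those fields across all suites and architectures. A minimal sketch with a made-up two-entry index (the field name is the real one; the packages are fake):

```shell
# Fake two-entry Packages index for illustration.
printf 'Package: foo\nSize: 1000\n\nPackage: bar\nSize: 2500\n' > Packages

# Sum the compressed package sizes.
awk '/^Size:/ { total += $2 } END { print total " bytes" }' Packages
# On a real mirror you would run this over the decompressed Packages
# files of every suite/architecture, plus the Sources index.
```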


The 403 GB figure represents the total size of all packages in the Debian archive, not the disk space required for a typical installation, which is usually under 10 GB for a desktop system.


It seems to use C99. For folks who have read this book: for learning newer features in C, what do you recommend?

Also, does this book teach some of the pitfalls of the stdlib, like gets or scanf?

Basically I always read in HN/lobsters/other forums that modern C is just as safe as newer languages, but I don't see many books or tutorials teaching modern concepts, or even industry standards like MISRA.


> Basically I always read in HN/lobsters/other forums that modern C is just as safe as newer languages

This is absolutely false. Modern C++ might get close (although I am not too familiar with it), but C has not gotten much safer over the years (one might argue it got even less safe due to threads being added in C11).

As for learning about modern versions of C: you can read cppreference[0], which has fairly accurate information on the standard. Not much has been added over the years, and C17 is merely a minor correction, so there have really only been two new versions since C99 (C11 and C23).

[0]: https://en.cppreference.com/w/c.html


I specifically mentioned MISRA standards for writing safe C. And ensuring thread safety is not just a C-language pitfall.

I was not looking for a language reference, but rather for how current software written in C ensures memory safety. What does their development paradigm look like?

Teaching both the software pitfalls in C and how to mitigate them, maybe by giving examples from modern software or standards like MISRA, would really help someone understand how to write better C. Just linking a language reference does not help.


I misunderstood the question then, since I thought it was about learning about the newer language features.

C coding standards differ quite a lot, and unfortunately I can't point out any software that has standards developed for safety.

To still try to be helpful: I've enjoyed reading nullprogram's blog[0], where he looks into interesting ways to make C more usable (although he tends to replace the entire standard library, for reasons highlighted here[1]).

There are some must-avoid functions (mostly string functions), and a good list of them has been compiled in the Git project repository[2].

I've tried looking up the MISRA C guidelines, but they were hard to find or (if I remember correctly) had to be paid for, or something like that.

[0] https://nullprogram.com
[1] https://nullprogram.com/blog/2023/02/11/
[2] https://github.com/git/git/blob/master/banned.h


For day-to-day use, what are the benefits of GNU Guix? From its website, what I could understand is that it provides installation of different versions of the same package, similar to rbenv or conda. Apart from this, is there anything else that would be considered useful over something like aptitude?


Reproducibility, just like Nix.

You can be certain that, if you've managed to get a piece of software running with Guix, you can also get it running identically on any other machine.
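
Concretely, the mechanism behind this is pinning the exact revision of the package collection in a channels.scm file (the commit hash below is purely illustrative):

```scheme
;; channels.scm -- pins the exact Guix revision every build will use
(list (channel
        (name 'guix)
        (url "https://git.savannah.gnu.org/git/guix.git")
        (commit "0123456789abcdef0123456789abcdef01234567")))
```

Anyone with this file can run `guix time-machine -C channels.scm -- shell -m manifest.scm` (where the manifest lists the packages you want) and get the same package versions and build inputs on their machine.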


Except if Nvidia cards and embedded systems are involved. Then whether you get it running is still a gamble.

