
Gluing together third-party stacks like there's no tomorrow doesn't make you any better at evaluating how a vision-only ML model (limited, yet uniquely hardware-accelerated, single-task, and running in real life, in real time, right now) would integrate with foreign data, especially when the actual relevant engineers have spoken directly against such things.

It might make you headstrong in arguing against something whose core sensibility would be easier to see if you weren't so invested in one specific corner/angle, though.

I once avoided a work project involving LIDAR scanning; even back then, the hellishness of the hardware was a large factor. I wouldn't mind playing around with a Jetson Nano, though.



>gluing together third party stacks like there's no tomorrow

That is why I don't really like ROS. lol =3

>doesn't make you better evaluating how a vision-only ML model

In general, monocular SLAM algorithms rely on salient feature extraction and several calibrated assumptions about the camera platform. How you interpret that output is another set of issues, and the power budget is going to take the hit.
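
For illustration, here's a minimal sketch of the feature-based front end such algorithms typically use, assuming OpenCV in Python; the frame paths and camera intrinsics are made-up placeholders, and note that with a monocular camera the recovered translation is only known up to scale:

    import cv2
    import numpy as np

    # Two consecutive grayscale frames (placeholder paths).
    frame_a = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
    frame_b = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

    # ORB: a common salient-feature detector/descriptor in SLAM front ends.
    orb = cv2.ORB_create(nfeatures=1000)
    kp_a, des_a = orb.detectAndCompute(frame_a, None)
    kp_b, des_b = orb.detectAndCompute(frame_b, None)

    # Brute-force Hamming matching, since ORB descriptors are binary.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)

    # Calibrated camera intrinsics are one of the "calibrated assumptions";
    # these numbers are invented for the example.
    K = np.array([[525.0, 0.0, 320.0],
                  [0.0, 525.0, 240.0],
                  [0.0,   0.0,   1.0]])

    # Recover relative camera motion from the matched features.
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])
    E, _ = cv2.findEssentialMat(pts_a, pts_b, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts_a, pts_b, K)

    # t is a unit direction: monocular vision cannot resolve absolute scale.
    print("rotation:\n", R, "\ntranslation direction:\n", t)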

For machine vision, I'd skip the proprietary Jetson Nano... and get a cheap gaming "parts" laptop with a broken LCD and several USB ports (an RTX 4090 or RTX 4080 is a trophy).

No one wants to fork over $30k for an outdoor lidar, but using only cameras is a fool's errand. The best commercial platforms I've seen use camera + lidar + radar.
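
To illustrate why those modalities complement each other, here's a toy 1D Kalman filter in Python fusing a position-like lidar measurement with a velocity-like radar measurement; the noise figures and simulated target are all made up for the sketch:

    import numpy as np

    # 1D constant-velocity model: lidar measures position well,
    # radar measures radial velocity well. State is [pos, vel].
    dt = 0.1
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    Q = np.diag([0.01, 0.01])               # process noise (invented)

    H_lidar = np.array([[1.0, 0.0]])        # lidar observes position
    R_lidar = np.array([[0.05]])
    H_radar = np.array([[0.0, 1.0]])        # radar observes velocity
    R_radar = np.array([[0.1]])

    x = np.array([[0.0], [0.0]])            # initial state estimate
    P = np.eye(2)                           # initial covariance

    def update(x, P, z, H, R):
        """Standard Kalman measurement update."""
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        return x + K @ y, (np.eye(2) - K @ H) @ P

    # Simulated target starting at 10 m, moving at 2 m/s, noisy readings.
    for step in range(50):
        x, P = F @ x, F @ P @ F.T + Q       # predict
        true_pos = 10.0 + 2.0 * dt * step
        z_lidar = np.array([[true_pos + np.random.normal(0, 0.2)]])
        z_radar = np.array([[2.0 + np.random.normal(0, 0.3)]])
        x, P = update(x, P, z_lidar, H_lidar, R_lidar)
        x, P = update(x, P, z_radar, H_radar, R_radar)

    print("estimated [pos, vel]:", x.ravel())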

For student projects, one can get small radars and TOF sensors for under $20 off SparkFun (similar to the sensor in the iPhone 11/12/13 Pro). We live in the future... =3
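
As a concrete example, a cheap VL53L0X time-of-flight breakout can be read in a few lines of CircuitPython; the specific sensor and Adafruit's adafruit_vl53l0x driver here are my assumptions, not something named in the thread:

    import time
    import board
    import busio
    import adafruit_vl53l0x

    # Assumes the breakout is wired to the board's default I2C pins.
    i2c = busio.I2C(board.SCL, board.SDA)
    sensor = adafruit_vl53l0x.VL53L0X(i2c)

    while True:
        # .range reports the measured distance in millimeters.
        print("distance: {} mm".format(sensor.range))
        time.sleep(0.1)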



