
The title is very misleading. The technique (RTK) depends on using a second receiver, not just a smartwatch on its own.

Title is fine. The key is the plural, smartwatches. From the actual article:

> No geodetic station is involved in this study, as one of the smartwatches serves as the base station.
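For anyone unfamiliar with RTK, the reason a second receiver matters is that a base unit sitting at a fixed, known position can measure the shared error on each satellite signal and hand that correction to the rover. A toy sketch of the differential idea, with made-up numbers (real RTK resolves carrier-phase ambiguities, not just pseudorange offsets):

    # Base station: a surveyed position lets it measure the error in each
    # satellite's pseudorange (atmospheric delay, clock drift, etc.).
    base_true_range = 20_183_412.7       # metres, from the known base position
    base_measured_range = 20_183_415.9   # metres, as observed by the base receiver
    correction = base_true_range - base_measured_range

    # Rover (the second watch): the same error is common to both receivers,
    # so applying the base's correction cancels most of it.
    rover_measured_range = 20_190_002.4
    rover_corrected_range = rover_measured_range + correction
    print(f"correction {correction:+.1f} m -> corrected {rover_corrected_range:.1f} m")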


Oh, that's interesting, I'll have to read it... Did they set up the smartwatch in a fixed place?

Ah yes, somehow I missed this :-S. Would edit my comment if I could.

> With AI systems, almost all bad behaviour originates from the data that’s used to train them

Careful with this - even with perfect data (and training), models will still get stuff wrong.


Indeed, this has been the most contentious line in the whole piece :D

How do you define "perfect" data and training? I'd argue that if you trained a small NN to play tic-tac-toe perfectly, it'd quickly memorise all the possible scenarios, and since the world state is small, you could exhaustively prove that it's correct for every possible input. So at the very least, there's a counterexample showing that with perfect data and training, models will not get stuff wrong.
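For what it's worth, that exhaustive check is cheap to run. A minimal sketch, using a memoised minimax table as a stand-in for the fully-memorising model (illustrative only, not anyone's actual experiment):

    from functools import lru_cache

    LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winner(board):
        for a, b, c in LINES:
            if board[a] != "." and board[a] == board[b] == board[c]:
                return board[a]
        return None

    @lru_cache(maxsize=None)           # the "memorised model": one entry per state
    def value(board, player):
        """Game value for the side to move (`player`), under optimal play."""
        if winner(board):
            return -1                  # the previous mover completed a line
        if "." not in board:
            return 0                   # draw
        opp = "O" if player == "X" else "X"
        return max(-value(board[:i] + player + board[i+1:], opp)
                   for i, cell in enumerate(board) if cell == ".")

    # Exhaustive check over the entire (small) world state:
    assert value("." * 9, "X") == 0    # perfect play from the empty board is a draw
    print("states memorised:", value.cache_info().currsize)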


Having too many parameters in your model, so that all of the sample/training data is preserved perfectly, is usually considered a bad thing (overfitting).

But you're right - if the dataset is exhaustive and finite, and the model is large enough to preserve it perfectly - such an overfitted model would work just fine, even if it's unlikely to be a particularly efficient way to build one.


> I'll personally attest: LLMs have been absolutely incredible to self-learn new things post graduation.

How do you know when it's bullshitting you though?


All the same ways I know when Internet comments, outdated books, superstitions, and other humans are bullshitting me.

Sometimes right away: something sounds wrong. Sometimes when I try to apply the knowledge and discover a problem. Sometimes never: I believe many incorrect things even today.


When you Google the new term it gives you and you get good results, you know it wasn't made up.

Since when was it acceptable to only ever look at a single source?


That’s the neat part, you don’t!


Same way you know for humans?


But an LLM isn't a human; with a human you can read body language or look up their past body of work. How do you do this with an LLM?


Many humans tell you bullshit because they think it's the truth and factually correct. Not so different to LLMs.


I honestly don't know how they convince employees to make features like this - like, they must dogfood and see how wrong the models can be sometimes. Yet there's a conscious choice to not only release this to, but actively target, vast swathes of people who literally don't know better.


High paychecks


In my code these days, I have:

TODO

SHOULDDO

COULDDO

The TODOs generally don't make it to main, the others sometimes get picked up eventually.


I only use "TODO", optionally followed by a sub-classification like "TODO bug": it maximizes discoverability by other devs and tools, and allows for a lot of variants (technical and/or functional) while still permitting a complete scan with a single grep.
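To illustrate that single-scan property, here's a hypothetical little scanner; the TAG regex and the bucketing-by-following-word are assumptions for the sketch, not anything from this thread:

    import re, pathlib
    from collections import defaultdict

    TAG = re.compile(r"#\s*TODO(?:\s+(\w+))?")   # matches "# TODO" and "# TODO bug"

    buckets = defaultdict(list)
    for path in pathlib.Path(".").rglob("*.py"):
        for n, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            m = TAG.search(line)
            if m:
                buckets[m.group(1) or "general"].append(f"{path}:{n}")

    # One pass, every variant, grouped by its sub-classification:
    for tag, hits in sorted(buckets.items()):
        print(f"{tag} ({len(hits)}):", *hits, sep="\n  ")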


I think the common terminology for these is FIXME, TODO, XXX in that order, but YMMV.



> I will be happy to continue the discussion on what is a good prediction or not. I have mapped a lot of swimming pools myself and edited and removed a lot of (presumably) human contributed polygons that looked worse (to my eyes) than the predictions I approved to be uploaded.

Something else you need to be mindful of is that the Mapbox imagery may be out of date, especially for the super zoomed-in stuff (which comes from aerial flights). So e.g., a pool built 2 years ago might not show up.

https://docs.mapbox.com/help/dive-deeper/imagery/


This is a general problem when trying to compare OSM data with aerial imagery. I've worked a lot with orthos from Open Aerial Map, whose stated goal is to provide high-quality imagery that's licensed for mapping. If you try to take OSM labels from the bounding boxes of those images and use them as segmentation labels, they're often misaligned or not detailed enough. In theory those images ought to have the best corresponding data, but OAM allows people to upload open imagery generally, and not all of it is mapped.

I've spent a lot of time building models for tree mapping. In theory you could use that as a pipeline with OAM to generate forest regions for OSM, and it would probably be better than human labels, which tend to be very coarse. I wouldn't discount AI labeling entirely, but it does need oversight and you probably want a high confidence threshold. One other thought: you could compare the overlap between predicted polygons and human polygons and use that as a prompt to review for refinement. This would be helpful for things like individual buildings, which tend not to be mapped particularly well (i.e. tight to the structure), but a modern segmentation model can probably provide very tight polygons.
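The overlap comparison is a one-liner with a geometry library. A minimal sketch assuming shapely, with made-up coordinates and an arbitrary 0.8 threshold:

    from shapely.geometry import Polygon

    predicted = Polygon([(0, 0), (10, 0), (10, 8), (0, 8)])   # model output
    existing  = Polygon([(1, 1), (12, 1), (12, 9), (1, 9)])   # human-mapped

    # Intersection-over-union: 1.0 is a perfect match, 0.0 is no overlap.
    iou = predicted.intersection(existing).area / predicted.union(existing).area

    REVIEW_THRESHOLD = 0.8   # arbitrary; tune per feature class
    if iou < REVIEW_THRESHOLD:
        print(f"IoU {iou:.2f} below threshold - queue for manual review")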


Beautiful... 3rd party dependency exploit thwarted by its own 3rd party dependency.



> Just to clarify, GDPR has nothing to do with cookies.

Not strictly true - cookies are highlighted as a potential source of PII: https://gdpr.eu/cookies/


But as others have pointed out, the law is extremely technology-agnostic. Sticking the same information in a JWT makes no difference either way.
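To make the JWT point concrete: the payload of a JWT is just base64url-encoded JSON, so an identifier stored there is every bit as much PII as one stored in a cookie. A minimal sketch with a hypothetical payload (signature handling omitted):

    import base64, json

    payload = {"sub": "user-12345", "email": "alice@example.com"}  # PII either way
    token_body = base64.urlsafe_b64encode(json.dumps(payload).encode()).rstrip(b"=")

    # Anyone can decode it without a key - no "anonymisation" happens:
    padded = token_body + b"=" * (-len(token_body) % 4)
    print(json.loads(base64.urlsafe_b64decode(padded)))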

