onnimonni's comments

Slightly off topic, but I would really like to see a DuckDB-based alternative to https://pgrouting.org.

It's so easy to embed DuckDB anywhere. Current smartphones already have enough CPU juice to handle almost anything, and DuckDB can query and cache GeoParquet files, e.g. from Overture Maps.
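
A minimal sketch of what embedding that looks like, assuming the community duckdb Rust crate (the S3 path below is made up -- check the current Overture release docs for the real GeoParquet layout):

  use duckdb::{Connection, Result};

  fn main() -> Result<()> {
      // An in-memory DuckDB instance embedded in the process.
      let conn = Connection::open_in_memory()?;
      // httpfs lets DuckDB read remote files; spatial adds GIS functions.
      // Both extensions need network access to install the first time.
      conn.execute_batch("INSTALL httpfs; LOAD httpfs; INSTALL spatial; LOAD spatial;")?;
      // Hypothetical bucket path, for illustration only.
      let count: i64 = conn.query_row(
          "SELECT count(*) FROM read_parquet('s3://example-overture-release/roads/*.parquet')",
          [],
          |row| row.get(0),
      )?;
      println!("{count} road segments");
      Ok(())
  }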


Wow, thanks for mentioning https://streetcomplete.app! This looks very intuitive to use for edits on OpenStreetMap.

Would someone here know a similar tool for iOS or macOS? Or any recommendations for editing roads?

We are currently driving a 4.5-tonne motorhome around Europe. The road weight and height limits are usually marked properly in OsmAnd+, but when they are not, we waste multiple hours rerouting in the Alps, and I would really like to help the next person in a similar situation.


There’s been work put into making this happen, and the EU has now also given funding to make it multiplatform: https://nlnet.nl/project/StreetComplete-multiplatform/


Mentioning it just in case, but openstreetmap.org's web editor (iD) is a good place to start on desktop.

There's also Every Door [1], which is very nice for editing OSM, and they do seem to have an iOS version. Depending on what you want to edit, it can be very handy.

I have not tried the numerous other, more advanced options [2].

[1] https://every-door.app/

[2] https://wiki.openstreetmap.org/wiki/Editors


Go Map, or just make bookmarks in OsmAnd and go back later


"Go Map!!" was indeed pretty easy to use. Thanks!


Another vote for Go Map!! The developer is an ex-Microsoft Sysinternals guy. I have been a beta tester for years. Love the app for quick edits in the field.

https://wiki.openstreetmap.org/wiki/Go_Map!!

https://apps.apple.com/us/app/go-map/id592990211


Is there anything like this but for MacOS?


I think the closest is an app with a full GUI called Little Snitch. It's pretty impressive.

https://www.obdev.at/products/littlesnitch/index.html


Would someone with more experience be able to explain to me why these operations can't be "safe"? What is blocking Rust from producing the same machine code in a "safe" way?


Rust's raw pointers are more-or-less equivalent to C pointers, with many of the same types of potential problems like dangling pointers or out-of-bounds access. Rust's references are the "safe" version of doing pointer operations; raw pointers exist so that you can express patterns that the borrow checker can't prove are sound.

Rust encourages using unsafe to "teach" the language new design patterns and data structures; and uses this heavily in its standard library. For example, the Vec type is a wrapper around a raw pointer, length, and capacity; and exposes a safe interface allowing you to create, manipulate, and access vectors with no risk of pointer math going wrong -- assuming the people who implemented the unsafe code inside of Vec didn't make a mistake, the external, safe interface is guaranteed to be sound no matter what external code does.

Think of unsafe not as "this code is unsafe", but as "I've proven this code to be safe, and the borrow checker can rely on it to prove the safety of the rest of my program."
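
As a toy illustration of that idea (this is not how Vec is actually implemented, just a fixed-capacity sketch of the same encapsulation pattern):

  use std::mem::MaybeUninit;

  /// A toy bounded stack: unsafe on the inside, a safe API on the outside.
  /// Invariant upheld by every method: buf[..len] is initialized.
  pub struct TinyStack {
      buf: [MaybeUninit<i32>; 8],
      len: usize,
  }

  impl TinyStack {
      pub fn new() -> Self {
          Self { buf: [MaybeUninit::uninit(); 8], len: 0 }
      }

      /// Returns false instead of overflowing the buffer.
      pub fn push(&mut self, v: i32) -> bool {
          if self.len == self.buf.len() {
              return false;
          }
          self.buf[self.len].write(v);
          self.len += 1;
          true
      }

      pub fn get(&self, i: usize) -> Option<i32> {
          if i >= self.len {
              return None;
          }
          // SAFETY: i < len, and buf[..len] is initialized (our invariant).
          Some(unsafe { self.buf[i].assume_init() })
      }
  }

  fn main() {
      let mut s = TinyStack::new();
      s.push(42);
      assert_eq!(s.get(0), Some(42));
      assert_eq!(s.get(1), None); // out of bounds is an Option, not UB
  }

No caller, no matter how badly written, can make get() read uninitialized memory; the one unsafe expression is justified by an invariant the safe methods maintain.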


Why does Vec need to have any unsafe code? If you respond "speed"... then I will scratch my chin.

    > For example, the Vec type is a wrapper around a raw pointer, length, and capacity; and exposes a safe interface allowing you to create, manipulate, and access vectors with no risk of pointer math going wrong -- assuming the people who implemented the unsafe code inside of Vec didn't make a mistake, the external, safe interface is guaranteed to be sound no matter what external code does.
I'm sure you already know this, but you can do exactly the same in C by using an opaque pointer to protect the data structure. Then you write a bunch of functions that operate on the opaque pointer. You can use assert() to protect against unreasonable inputs.


Rust doesn't have compiler-magic support for anything like a vector. The language has syntax for fixed-sized arrays on the stack, and it supports references to variable-length slices; but it has no magic for constructing variable-length slices (e.g. C++'s `new[]` operator). In fact, the compiler doesn't really "know" about the heap at all.

Instead, all that functionality is written as Rust code in the standard library, such as Vec. This is what I mean by using unsafe code to "teach" the borrow checker: the language itself doesn't have any notion of growable arrays, so you use unsafe to define its semantics and interface, and now the borrow checker understands growable arrays. The alternative would be to make growable arrays some kind of compiler magic, but that's both harder to implement correctly and not generalizable.
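
To make "the compiler doesn't know about the heap" concrete, here's a minimal sketch of the raw machinery Vec builds on: allocation is just a library call into std::alloc, nothing the language itself understands.

  use std::alloc::{alloc, dealloc, Layout};

  fn main() {
      // Describe a block big enough for four u32s.
      let layout = Layout::array::<u32>(4).unwrap();
      unsafe {
          let ptr = alloc(layout) as *mut u32;
          assert!(!ptr.is_null(), "allocation failed");
          for i in 0..4 {
              ptr.add(i).write(i as u32 * 10); // initialize every slot
          }
          assert_eq!(*ptr.add(2), 20);
          dealloc(ptr as *mut u8, layout);
      }
  }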

> you can do exactly the same in C by using an opaque pointer to protect the data structure. Then you write a bunch of functions that operate on the opaque pointer. You can use assert() to protect against unreasonable inputs.

That's true and that's a great design pattern in C as well. But there are some crucial differences:

- Rust has no undefined behavior outside of unsafe blocks. This means you only need to audit unsafe blocks (and any invariants they assume) to be sure your program is UB-free. C does not have this property even if you code defensively at interface boundaries.

- In Rust, most of the invariants can be checked at compile time; the need for runtime asserts is less than in C.

- C provides no way to defend against dangling pointers without additional tooling & runtime overhead. For instance, if I write a dynamic vector and get a pointer to an element, there's no way to prevent me from using that pointer after I've freed the vector, or after appending an element caused the container to get reallocated elsewhere. (Rust rejects exactly this pattern at compile time; see the sketch below.)
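
A small sketch of that third point; the commented-out line is the one the borrow checker rejects:

  fn main() {
      let mut v = vec![1, 2, 3];
      let first = &v[0]; // borrow an element
      // v.push(4);      // uncommenting this fails to compile: push needs
      //                 // &mut v while `first` is still live, which is
      //                 // exactly the reallocation/use-after-free hazard
      //                 // the C version can't catch
      println!("{first}");
      v.push(4); // fine here: the borrow of `first` has ended
      println!("{v:?}");
  }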

Rust isn't some kind of silver bullet where you feed it C-like code and out comes memory safety. It's also not some kind of high-overhead garbage collected language where you have to write unsafe whenever you care about performance. Rather, Rust's philosophy is to allow you to define fundamental operations out of small encapsulated unsafe building blocks, and its magic is in being able to prove that the composition of these operations is safe, given the soundness of the individual components.

The stdlib provides enough of these building blocks for almost everything you need to do. Unsafe code in library/systems code is rare, used to teach the language new patterns or data structures that can't be expressed solely in terms of the types exposed by the stdlib. Unsafe in application-level code is virtually never necessary.


Those specific functions are compiler-builtin vector intrinsics. The main reason is that they can easily read past the ends of arrays and have type-safety and aliasing issues.

By the way, the Rust compiler does generate such code, because under the hood LLVM runs an autovectorizer when you turn on optimizations. However, for the autovectorizer to do a good job you have to write code in a very special way, and you have no way of guaranteeing that it kicked in, or that it did a good job once it did.

There’s work on creating safe abstractions that also transparently scale to the appropriate vector instructions (e.g. std::simd), but progress on that has felt slow to me personally, and it's currently not available outside nightly.
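
For the curious, a sketch of what the nightly portable SIMD API looks like today; it's unstable, so the details may change:

  // Nightly-only: requires the unstable portable_simd feature.
  #![feature(portable_simd)]
  use std::simd::f32x4;

  fn main() {
      let a = f32x4::from_array([1.0, 2.0, 3.0, 4.0]);
      let b = f32x4::from_array([10.0, 20.0, 30.0, 40.0]);
      let c = a + b; // one SIMD add, no unsafe, no per-target intrinsics
      println!("{:?}", c.to_array());
  }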


    > However, for the autovectorizer to do a good job you have to write code in a very special way
Can you give an example of this "very special way"?


For example, many autovectorizers get upset if you put control flow in your loop.
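
A sketch of the difference; whether LLVM actually vectorizes either one depends on the target and flags, but this is the flavor of it:

  // Likely to autovectorize: fixed trip count, straight-line arithmetic,
  // branchless clamping via min().
  fn sum_clamped(xs: &[i32]) -> i32 {
      xs.iter().map(|&x| x.min(100)).sum()
  }

  // Often defeats the autovectorizer: the early exit makes the trip
  // count depend on the data itself.
  fn sum_until_negative(xs: &[i32]) -> i32 {
      let mut total = 0;
      for &x in xs {
          if x < 0 {
              break;
          }
          total += x;
      }
      total
  }

  fn main() {
      let xs = [5, 200, 7, -1, 9];
      println!("{} {}", sum_clamped(&xs), sum_until_negative(&xs));
  }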


Often the unsafe code is at the edges of the type system. E.g. sometimes the proof of safety is that someone read the source code of the C library that you are calling out to. It's not useful to think of machine code as safe or unsafe; safety often refers to whether the types of your data match the lifetime dataflow.


It seems pretty awful that the de facto way to use GitHub Actions is via git tags, which are not immutable. For example, to check out code [1]:

- uses: actions/checkout@v4

GitHub does advise people to harden their actions by referring to git commit hashes [2], but GitHub currently only supports SHA-1 as the hashing algorithm. Creating collisions with this algorithm will become more and more affordable, and I'm afraid we will see attacks using hash collisions during my lifetime.
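
With hash pinning, the same step would look something like this instead (the SHA here is a placeholder, not a real checkout release):

- uses: actions/checkout@<full 40-character commit SHA> # v4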

I hope they add support for SHA-256 soon, and I wrote product feedback about it here: https://github.com/orgs/community/discussions/154056

If this resonates with you please go and give it a thumbs up :)

[1]: https://github.com/actions/checkout?tab=readme-ov-file#usage

[2]: https://docs.github.com/en/actions/security-for-github-actio...


> ... SHA-1 ... collisions ... will be more and more affordable.

I can put your fears on that account to rest. On the current trajectory, that's not gonna happen.

While a collision has been successfully produced, that is still very far from creating a specific collision with a payload you actually want to deliver, at a reasonable size, so that a sanity check (such as a multi-GB file tripping timeouts in CI or similar) wouldn't "accidentally" detect it.

This is so far beyond our current technological capabilities, and Moore's law hasn't been active for over a decade now. Sure, we've had astounding success in the GPU space, but that's still not even remotely close to the trajectory we were on under Moore's law.


I wasn't aware of the SHA-1 collision detection GitHub already has in place. It's a very interesting read, and AFAIK it means exploiting SHA-1 collisions on GitHub is not practical:

https://github.blog/news-insights/company-news/sha-1-collisi...

Is anyone aware of a git hook I could use to analyse my .github/workflows/*.yml files and replace git tags like "v4" with the corresponding git commit hashes?

I think this would make it much safer to use 3rd party GitHub Actions.


That's the sort of hook you should be able to write yourself pretty quickly, so I threw your comment into o3-mini-high and it gave me a decent-looking solution. Decent but wrong, since it thought "current git commit" referred to the project repo rather than the referenced dependency.

Anyway, here's the gist of a solution, without any of the necessary checking that the files actually exist, etc.:

  #!/bin/sh
  # For every "uses: owner/repo@vN" in the workflow files, resolve the tag
  # to a commit hash via the remote and pin the workflow to that hash.
  for file in .github/workflows/*.yml; do
    grep -E "uses:[[:space:]]+[A-Za-z0-9._-]+/[A-Za-z0-9._-]+@v[0-9]+" "$file" | while read -r line; do
      repo=$(echo "$line" | sed -E 's/.*uses:[[:space:]]+([A-Za-z0-9._-]+\/[A-Za-z0-9._-]+)@v[0-9]+.*/\1/')
      tag=$(echo "$line" | sed -E 's/.*@(v[0-9]+).*/\1/')
      # Annotated tags list both the tag object and the peeled commit
      # ("^{}" line); the commit comes last, so take the last line.
      commit_hash=$(git ls-remote "https://github.com/$repo.git" "refs/tags/$tag" | tail -n 1 | awk '{print $1}')
      # -i.bak keeps BSD/macOS sed happy; drop the backup once sed succeeds.
      [ -n "$commit_hash" ] && sed -i.bak -E "s|(uses:[[:space:]]+$repo@)$tag|\1$commit_hash|g" "$file" && git add "$file" && rm -f "$file.bak"
    done
  done
  exit 0
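
To run it as an actual hook you'd save it as .git/hooks/pre-commit and mark it executable. One refinement worth considering: append the original tag as a YAML comment after the hash (uses: owner/repo@<sha> # v4), so humans can still tell which version is pinned.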


Thanks! Today I learned:

$ git ls-remote "https://github.com/$repo.git" "refs/tags/$tag"

Even though the grep and sed are not very readable, this was a very useful way to avoid yet another tool!


Thanks for creating and sharing this :) Commenters here can be pretty awful, so I wanted to say that I really enjoy using the tools you have created.


What I'm still really missing is a CLI for iCloud-stored passwords. AFAIK the 'security' CLI can't access credentials stored in the cloud. This would be helpful for storing secrets outside of git while still allowing scriptable access to them, similar to what the 1Password CLI 'op' offers.


I'm storing a lot of text documents (.html) which contain long similar sections and are thus not exact copies but "partial copies".

Would someone know if the fast dedup also works for this? Anything else I could be using instead?


Matt, you probably don't remember me, but we met briefly at WordCamp Vienna 8 years ago. I was hugely inspired by you for many years, and still was until a few weeks ago.

It's not too late to stop this madness.


Using a Starlink dish would probably have been cheaper?

Upload speeds are not as good as download speeds, but 2 terabytes should upload in about 2 weeks and 4 days at Starlink's 10 Mbps upload.
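
For the math:

  2 TB = 16,000,000 megabits
  16,000,000 Mb / 10 Mbps = 1,600,000 s ≈ 18.5 days ≈ 2 weeks and 4.5 days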

If you want to be really cheap you could do this within the 30-day trial period and return the dish afterwards, but I'm assuming you have similar upload needs often, so keeping the dish is probably better.


Test comment please ignore

