
We have a tool at work, forked from https://github.com/onetrueawk/awk, for processing the binary serialization of one of our proprietary in-house protocols, which has hundreds of record types.

I am blown away by how elegant the result is.



Neat! I'm curious -- would it be possible to share a few (non-sensitive!) details about how this works / what this looks like?


So essentially the binary records are tagged with a record type, which the tool uses (via a library maintained by another team) to look up metadata (field names, types, ordering, etc.) for that record type.

Human-meaningful field names are then available in your awk expressions as records are processed. This is an improvement over dumping the record as delimited text and running regular awk with meaningless $1, $2 variables.
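
To make that concrete (with a completely made-up tool name, record, and fields, since I can't share the real schema), the difference is roughly:

    # plain awk over a '|'-delimited text dump: positional and opaque
    awk -F'|' '$3 == "AAPL" && $7 > 1000 { print $1, $7 }' trades.txt

    # hypothetical invocation of the awk fork: fields resolved by name
    # from the record-type metadata
    recawk 'symbol == "AAPL" && qty > 1000 { print order_id, qty }' trades.bin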

I could be wrong, but I believe the relational operators also recognize common record field types like dates and timestamps, which a text dump + regular awk couldn't handle.
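
Continuing the made-up sketch above: if the metadata types a field as a timestamp, a comparison can presumably be done on the decoded value rather than on whatever textual form a dump happens to use:

    # hypothetical: 'ts' is typed as a timestamp in the record metadata,
    # so the comparison is chronological rather than a string compare
    recawk 'ts >= "2024-01-01T00:00:00Z" { n++ } END { print n }' trades.bin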

The output of the tool is always a human readable serialization.


Not the parent, but this is what the vnl-filter tool in the vnlog toolkit does: https://github.com/dkogan/vnlog

It is indeed mind-blowingly useful in countless contexts. Disclaimer: I'm the author
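
Roughly, a vnlog file is just whitespace-separated ASCII with a '# col1 col2 ...' legend line:

    # time speed lat lon
    1.0  4.2 37.7749 -122.4194
    2.0 11.8 37.7750 -122.4180

and vnl-filter lets you select and filter columns by name, awk-style, with something like:

    # keep only the fast rows, print just two columns
    < data.vnl vnl-filter -p time,speed 'speed > 10'

(see the manpage for the full set of options)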



