This is described as being related to 'NitroTPM', a 'virtual device [...] which conforms to the TPM 2.0 specification'.
Ordinarily, the way you do attestation with a TPM is to perform a TPM quote operation, which provides a TPM-signed attestation of a set of PCRs.
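For comparison, a standard quote flow with the tpm2-tools CLI looks roughly like this (just a sketch; the key handles, file names and PCR selection below are illustrative):

# Create an endorsement key and an attestation key under it (names are illustrative)
tpm2_createek -c ek.ctx -G rsa -u ek.pub
tpm2_createak -C ek.ctx -c ak.ctx -u ak.pub -n ak.name

# Quote PCRs 0-7 with a verifier-supplied nonce; the TPM signs the PCR digest with the AK
tpm2_quote -c ak.ctx -l sha256:0,1,2,3,4,5,6,7 \
    -q "$(cat nonce.hex)" -m quote.msg -s quote.sig -o quote.pcrs

# The verifier checks the signature against the AK public key and the expected PCR values
tpm2_checkquote -u ak.pub -m quote.msg -s quote.sig -f quote.pcrs -g sha256 -q "$(cat nonce.hex)"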
This is... not that. Judging by the source code for some of the provided tools, this appears to be invoking an undocumented(?) vendor-specific command on the TPM device, which seems related to the previous Nitro Secure Enclaves support. When AWS introduced Nitro Secure Enclaves, they came up with their own signed attestation document format rather than reusing the TPM standard, then added support for that format to KMS.
The documentation also states: "An Attestation Document is generated by the NitroTPM and it is signed by the Nitro Hypervisor."
This seems to suggest that different entities are responsible for generating AWS's Attestation Document and for signing it, which seems rather odd. What's going on here?
If AWS's claims that NitroTPM is TPM 2.0 compliant are true, it seems like there are now basically two completely different APIs for remote attestation exposed via the TPM virtual device: TPM 2.0 Quote operations and AWS Attestation Documents via an undocumented(?) vendor-specific API.
I can understand wanting to take this approach for consistency with their previous Nitro Secure Enclaves API, but it's at the expense of consistency with the existing TPM 2.0 standard. Presumably if you want to use KMS with this you have to use their Attestation Document format rather than a TPM 2.0 Quote, forcing you to use a vendor-specific API with it.
Just some thoughts, and this is just my quick impression. I could be mistaken. In any case, having this functionality with a viable KMS tie-in is certainly valuable, so it's nice to see in the sense that you no longer have to create a Nitro Secure Enclave to get this kind of functionality if you don't need the dual compartments separated via a vsock.
Most webapps would be dramatically improved if the developers were banned from using any JavaScript on the client side in the first version, and allowed only to apply progressive enhancements from there.
The aim of <selectedoption> is to provide a DOM placeholder to contain a clone of the contents of the selected <option>. It isn't there as a style hook for the selected option in the popover - that already exists via option:selected.
You aren't the first to get mixed up here. Personally I think <selectedoption> is a misleading name. I wish it was called something like <selectedcontent>, but I don't know if that is much better https://github.com/openui/open-ui/issues/1112
With every new PostgreSQL release we see yet more features and sugar added to the frontend, yet seemingly no meaningful improvement to the backend/storage layer which suffers these fundamental problems.
I wish the PostgreSQL community would stop chasing more frontend features and spend a concerted few years completely renovating their storage layer. The effort in each release seems massively and disproportionately skewed towards frontend improvements without the will to address these fundamental issues.
It's absurd that in 2024, "the world's most advanced open source database" doesn't have a method of doing upgrades between major versions that doesn't involve taking the database down.
Yes, logical replication exists, but it still doesn't do DDL, so it has big caveats attached.
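For reference, the logical-replication upgrade path looks roughly like this (a sketch; the host and object names are made up, and the schema has to be carried over by hand precisely because DDL isn't replicated):

# On the old-version primary (requires wal_level=logical): publish everything
psql -h old-primary -d mydb -c "CREATE PUBLICATION upgrade_pub FOR ALL TABLES;"

# Copy the schema to the new-version server manually, since DDL is not replicated
pg_dump -h old-primary --schema-only mydb | psql -h new-primary -d mydb

# Subscribe on the new server: initial table copy plus ongoing changes flow over
psql -h new-primary -d mydb -c "CREATE SUBSCRIPTION upgrade_sub CONNECTION 'host=old-primary dbname=mydb' PUBLICATION upgrade_pub;"

# Once it has caught up: stop writes on the old primary, let the subscriber drain, repoint clients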
The design of good storage layers in databases is deeply architectural. As a consequence, it is essentially a "forever" design decision. Fundamentally changing the storage architecture will alter the set of tradeoffs being made such that it will break the assumptions of existing user applications, which is generally considered a Very Bad Thing. The existing architecture, with all its quirks and behaviors, is part of the public API (see also: Hyrum's Law).
In practice, the only way to change the fundamental architecture of a database is to write a new one, with everything that entails.
> a method of doing upgrades between major versions that doesn't involve taking the database down.
For large instances this is a big ask, especially of a project without a single person in charge. MySQL does have better replication, yet it still often requires manually setting that up and cutting over to it to do major version upgrades.
Rearchitecting the storage layer takes time. A storage manager API didn't even exist until fairly recently, maybe 14. That API needs to undergo changes to account for things that Oriole is trying to do. Postgres is not developed by a team. It's a community effort, and people work on what they want to work on. If you're interested in a previous attempt to change the storage layer, you can learn about what happened with zheap[0].
Better yet: decouple front and back ends. Let them talk over a stable interface and evolve independently. The SQLite ecosystem is evolving in this direction, in fits and starts.
They might not be strictly necessary but they tend to solve more issues than they cause. Many languages which don't come with a centralized repository built in will tend to spontaneously gain one because they are in fact useful.
"Drunk driving may kill a lot of people, but it also helps a lot of people get to work on time, so it's impossible to say if it's bad or not." - Internet person
This is exhausting.
Arguments like this don't persuade anyone—in fact, they do the opposite. They just highlight that you don't have any reasonable points to make, and are left relying on unfalsifiable and absurd claims.
What's exhausting is your analogy: drunk driving does in fact cause more problems than it solves. Make a reasonable point yourself first before saying I don't have one.
"Unnecessary" is a very strong word and even Go has a centralized module proxy nowadays. So there is an obvious benefit of centralization here, and you should explain what and how much is the overside to give it up.
This feels like a new genre of hardware hacking to me, where someone is motivated to make a device out of compassion for their family or others. It reminds me of this instance where someone designed their own peristaltic pump to ensure their grandfather can eat:
I seem to recall another similar device to this posted on HN also, but with audiobooks.
On an unrelated note, the modern digital age does deprive me of my longtime love of removable media, whether analogue or digital. There's a mechanical satisfaction in having a physical token which is decisively inserted into something. USB drives just don't have the kinetic enjoyment of a floppy disk or tape. (Clearly the next iteration of the OP's design needs a motorised NFC card loader, ATM-style. ;))
Not free, but the "Technic", "Simplex" and "ISOCP" fonts included with AutoCAD are also of this aesthetic, if people want an exhaustive list of candidates.
For single-stroke (AKA "routed") fonts of various aesthetics, look up SHX font files. I'm not sure what the license status is, but they're easy to find online. I use them for laser cutting.
Personally I've always considered it bad hygiene to commit generated outputs, but this article notes that this takes on a new significance in the light of supply chain security concerns. Good changes from PostgreSQL here.
Generated output, vendored source trees, etc. aren't, or can't be, meaningfully audited as part of a code review process, so they're basically merged without real audit or verification.
My personal preference is to never include generated output in a repository or tarball, including e.g. autoconf/automake scripts. This is directly contrary to the advice of the autotools documentation, which wants people to ship these unauditably gargantuan and obtuse generated scripts as part of tarballs... an approach which created an ideal space for things like the XZ backdoor.
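For a typical autotools project that just means whoever builds from a pristine checkout regenerates the scripts themselves, which is only a couple of commands (assuming the usual configure.ac/Makefile.am layout):

# Regenerate configure, Makefile.in, etc. from configure.ac and Makefile.am
autoreconf --install --force
./configure
make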
My take is that they should always be committed, but never generated by the dev; instead, they should be generated and pushed when necessary by CI. The problem with generating those files yourself is that, in many cases, it makes the output nondeterministic and nonreproducible. In an ideal world those tools would just generate those files deterministically, but until then, committing them from CI is an acceptable stopgap for me.
My preference is to do both. Have them generated by a dev, committed, and also generated in CI. The latter gets compared with the checked in contents to ensure the results match the expected value.
This speeds up CI (the generation path can be done in parallel) and most local development.
The one catch is that it relies on mostly trusting whoever has a commit bit. But if you don’t have that and any part of the build involves scripts that are part of the repo itself, then you’ve already lost.
> My preference is to do both. Have them generated by a dev, committed, and also generated in CI. The latter gets compared with the checked in contents to ensure the results match the expected value.
Bingo. This is what I am working towards convincing people to adopt at my current job. It's a long road.
The generation routine bits would be highly specific to the project, but the final check in CI is as simple as checking the git diff/status of the generated targets to see if they match the ref. Any deviance indicates that it's been missed by the patch submitter (likely inadvertently in the case of honest actors).
The real work is being able to transform the generation task into a reproducible step that can be run consistently anywhere. Containerizing those steps can help, but it's not strictly required, nor is it enough if the "inputs" include an unseeded random source or the current time.
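As a rough sketch (the generator script and output path are placeholders), the CI-side check really can be that small:

# Re-run the project's generator (placeholder command)
./scripts/generate.sh

# Fail if the regenerated files differ from what's committed (covers untracked files too)
if [ -n "$(git status --porcelain -- generated/)" ]; then
    echo "Generated files are out of date; re-run the generator and commit the result." >&2
    git diff -- generated/
    exit 1
fi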
That's not the case for autotools output, or flex and bison output.
If the generated files are what you say? Well, just embed the generation step into the build system. A simple approach like that is easily made reproducible, and we avoid introducing noise into the repository.
> an approach which created an ideal space for things like the XZ backdoor.
That's not entirely correct. Indeed there was a part of the xz backdoor that lived in the configure script. However, that part was also included in the sources of the configure script as found in the tarball (and not in the git archive).
Thus regenerating the configure script didn't help, but regenerating the tarball did.
They do not, and never did, commit generated files (as far as I can tell). Their release process used to generate some files and place them into a distribution file, but that file was never committed anywhere.
If you make a large but simple refactoring, like renaming a frequently-used function across a large repo, nobody is going to audit that diff and check for extra changes.
Things don't have to be this way, Google's source control systems apparently has tools that can do such refactorings for you in a centralized fashion, and one could make something like that for git.
Going to the extreme of this though, I really, really hate getting an autoconf project with no generated configure file. I don't want to install the full autotools suite just to do a build!
On the other hand, keeping tarballs close to the git tree makes it easy to reuse git archive and related GitHub features, provided the repo properly includes some kind of versioning information in tree.
Linux software sources are in a weird spot between users and developers.
I, as a developer, organize sources in a way that makes it easy for another developer to work with. My software will never be compiled by any user. All my users use build artifacts.
I might consider adding autogenerated code, but only when I'm like 99% sure that this code won't ever change. For example, that's the case for integration with many organizations where WSDLs are agreed upon once and then never touched. Having Java sources regenerated on every build just adds a few seconds to every build without any noticeable advantage.
The fact that some Linux users prefer to build software from source, yet at the same time don't want to install the necessary build tools, is a bit of a strange situation.
Maybe containers should be better utilized for this workflow. Like, the developer supplies a Dockerfile which builds the software and then copies it to some directory. You run `docker build .` and then copy the binary files from the container to the host.
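A minimal sketch of that workflow with stock Docker commands (image name and artifact path are made up):

# Build the image; the Dockerfile compiles the project and leaves the binary at /out/myapp
docker build -t myapp-build .

# Create a stopped container from the image and copy the artifact out to the host
cid=$(docker create myapp-build)
docker cp "$cid":/out/myapp ./myapp
docker rm "$cid"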
Including autoconf outputs serves to avoid having to have autoconf installed. Because autoconf installs historically lagged behind what autoconf-using projects wanted, this used to be a problem. Nowadays it's not that big a deal.
As u/nrabulinski says, you can have the CI system generate and commit (with signed commits) autoconf artifacts.
Foobar2000 is parasitic in the sense that many of the plugins that give foobar2000 its value are open-source ports of open-source software, yet the foobar2000 software that hosts the plugins is proprietary.
Feels like when Disney makes a movie version of a public domain folktale and then lobbies to perpetually extend the copyright on it.
The plugins were great. I measured the speakers at my desk (I built them), generated an inverse impulse response filter, and fed it through a plugin to do full-frequency equalization. It was a fun project playing with full-range speakers that had no passive filter network whatsoever, all done via software.
The problem (at least for me) is input format support.
Assorted foobar2000 plugins support every obscure tracker format, every obscure video game music format (.vgz, etc.), and then foo_midi lets you render MIDIs not just with Soundfonts but with whatever VSTi DLLs you like. Also support for music files in ZIP files as well as music files in ZIP files in ZIP files (don't ask). That's hard to compete with.
> Here in Unix I can just mount archives and disk images.
This is so true! And it's much easier than just having the music player support zip files. Especially for zip-in-zip like GP described. Can you imagine double-clicking an archive and having it play, rather than simply doing:
cd Downloads/
ls
mkdir tmp
mount-zip myfile.zip tmp
ls tmp
mkdir tmp2
mount-zip tmp/myinnerfile.zip tmp2
audacious --new-instance tmp2 --play
while killall -0 audacious; do
    sleep 1
done
umount tmp2
umount tmp
rmdir tmp2 tmp
Are there better MCUs on the market for these applications (other than massively expensive actual 'rad-hardened' space chips)? You can get MCUs with lockstep dual CPUs, for example (TMS570 etc., and I assume the automotive sector has loads of stuff).