I believe, though I can't find an unambiguous reference, that a tool or user is allowed to define and populate any XMP/Exif attribute it wants, within the technical limits of attribute/value definition of the relevant metadata. Whether or not anything else can read and make sense of that metadata is a different problem.
draw.io, probably. It's pretty good for a free tool. If you lost the XML, I think it even works to import an exported picture directly (I guess they append the data into it too).
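For what it's worth, here's a hedged sketch (Python with Pillow) of pulling the embedded diagram back out of a PNG export; the exact text-chunk key draw.io uses is an assumption on my part, so print all the keys and see what your file actually contains:

    from PIL import Image
    from urllib.parse import unquote

    # Sketch: draw.io PNG exports reportedly carry the diagram XML in a PNG
    # text chunk. The key name is not verified here - inspect img.text to see
    # what your export really stores.
    img = Image.open("diagram.png")
    for key, value in getattr(img, "text", {}).items():
        print(key)
        print(unquote(value)[:500])  # payload is often URL-encoded XML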
It's quite common for server-side web frameworks to send a single request through their stack multiple times, especially when there is any form of "middleware" concept involved. Often there's a need to skip some middleware on the second or third time through.
I've built systems in the past that do all sorts of re-dispatching.
One example: in development my API might live at /api/... but in production I might use api.my.site - with middleware that detects that host, rewrites the incoming request to add that /api/ prefix, and then runs it through the stack again.
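As a rough illustration, that host-detection rewrite might look something like this as ASGI-style middleware (the class and the guard condition are mine, not from any particular framework):

    class ApiHostMiddleware:
        # Sketch of the host-based rewrite: if the request arrived on the API
        # host, add the /api/ prefix and run it through the stack again.
        def __init__(self, app):
            self.app = app

        async def __call__(self, scope, receive, send):
            if scope["type"] == "http":
                headers = dict(scope.get("headers") or [])
                host = headers.get(b"host", b"").decode().split(":")[0]
                if host == "api.my.site" and not scope["path"].startswith("/api/"):
                    # Re-dispatch with the rewritten path; the startswith()
                    # check keeps this from looping forever.
                    return await self(dict(scope, path="/api" + scope["path"]), receive, send)
            await self.app(scope, receive, send)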
Authentication is a very common way this pattern is applied - check cookies / authorization headers / whatever, then add the authenticated user to the request somehow and re-dispatch the request through the stack so other layers can see who the user is.
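The authentication variant follows the same shape - resolve the user once, attach it to the request, send it through again (lookup_user and the "actor" key are illustrative names, not anyone's actual API):

    class AuthRedispatchMiddleware:
        def __init__(self, app, lookup_user):
            self.app = app
            self.lookup_user = lookup_user

        async def __call__(self, scope, receive, send):
            if scope["type"] == "http" and "actor" not in scope:
                headers = dict(scope.get("headers") or [])
                token = headers.get(b"authorization", b"").decode()
                # Attach the authenticated user and re-dispatch; the "actor"
                # check above prevents an infinite loop on the second pass.
                return await self(dict(scope, actor=self.lookup_user(token)), receive, send)
            await self.app(scope, receive, send)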
While you are generally right here, I wonder how common this is with middleware. Many middleware stacks have order dependencies and there are normally no loops involved. I don't think I have come across this for middleware, at least. Kinda curious about the particular motivation here.
Yeah I've been contemplating this with my own Datasette project recently: it doesn't have an official mechanism for "redispatch this request from the root again" but I've been tempted to add one.
My GraphQL plugin, for example, works by firing off internal requests against Datasette's REST API, and I ended up needing some gnarly hacks to get authentication to work with that.
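Roughly, the internal-request pattern looks like this with Datasette's httpx-based internal client; forwarding the incoming cookies so the inner request sees the same actor is one flavour of the gnarly hack (exact plumbing varies by version, and the path here is made up):

    async def fetch_rows(datasette, request):
        # Fire an internal request against Datasette's JSON API and forward
        # the caller's cookies so authentication carries over.
        response = await datasette.client.get(
            "/mydb/mytable.json",
            cookies=request.cookies,
        )
        return response.json()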
In my experience there are dragons if you try to make this work in general through a mechanism like that. I have only ever regretted this kind of stuff later, when the interactions were not entirely clear.
I like this kind of mechanism (circuit breaker) as a last-ditch effort to limit failures by erroring out before they do more damage. I've never had good experiences with silently disabling stuff.
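Something like this, say - a minimal circuit-breaker sketch that fails loudly after repeated errors instead of quietly turning the feature off (thresholds and names are made up):

    import time

    class CircuitOpen(Exception):
        pass

    class CircuitBreaker:
        def __init__(self, max_failures=5, reset_after=30.0):
            self.max_failures = max_failures
            self.reset_after = reset_after
            self.failures = 0
            self.opened_at = None

        def call(self, fn, *args, **kwargs):
            if self.opened_at is not None:
                if time.monotonic() - self.opened_at < self.reset_after:
                    # Fail loudly instead of silently skipping the dependency.
                    raise CircuitOpen("dependency disabled after repeated failures")
                # Half-open: allow one trial call after the cool-down.
                self.opened_at = None
                self.failures = 0
            try:
                result = fn(*args, **kwargs)
            except Exception:
                self.failures += 1
                if self.failures >= self.max_failures:
                    self.opened_at = time.monotonic()
                raise
            self.failures = 0
            return result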
NextJS, however, is likely constrained by its architecture and the decision to use serverless and edge compute for the backend.
Relying on obscure headers for conditional logic this way is certainly one way to avoid bringing in an extra dependency. And the middleware concept itself is fairly primitive compared to what you could do in any server-side API.
Arguably, though, the middleware itself is being trusted as the entry-point to the API when it’s barely more than a reverse proxy. It’s not really a vulnerability if you only auth’d the middleware and not your actual routes.
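In the spirit of that point, a framework-agnostic sketch of enforcing the check at the route itself rather than trusting the middleware pass alone (require_actor and the actor attribute are hypothetical names; a real framework would return a 403 rather than raise):

    from functools import wraps

    def require_actor(handler):
        # Route-level guard: even if upstream middleware was skipped or
        # bypassed, the handler itself refuses unauthenticated requests.
        @wraps(handler)
        async def wrapper(request, *args, **kwargs):
            if getattr(request, "actor", None) is None:
                raise PermissionError("authentication required")
            return await handler(request, *args, **kwargs)
        return wrapper

    @require_actor
    async def secret_route(request):
        return {"ok": True}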
Ah ok, yeah.. unfortunately this type of lag/mismanagement is pretty common once a company gets big enough. Oftentimes the right people don't get involved on the first pass... even at tech-first companies like this -- though at that point perhaps you're no longer tech-first :/
Same here. I miss the days when you were given a take-home or, god forbid, a real-world problem, something the company has run into that you can work through with an experienced interviewer.
Some of the worst interviews I've had, I've felt like I was a lab rat, just being evaluated for specific traits/reactions. The worst part is, like you said, every Tom, Dick, and Harry does this; it's somehow become some kind of lazy industry-standard hiring practice, hence the explosion of these interview aid services. Some people will argue it serves a purpose, but I really don't see how. It's like memorizing an answer sheet and doing well on a test. Is that really the type of problem solving you're looking for? Don't get me wrong, I do think it's important to understand foundational concepts and algorithms, but like.. it's also the first thing that's abstracted away for good reason in the real world. Anyways, it's shit.
The worst part is, even with my experience, I still feel like shit after these calls, though I should know better.
I've met people who believe that there are good guys and bad guys, that they are the good guys, and that there should be no protections for the bad guys.
You can't convince them that to others, or in the future, they might be seen as the bad guys. Because that just isn't true -- they're the good guys.
Yeah I agree, but I think this post is for those cases where this design might be inappropriate, mainly monoliths with a single DB.
I disagree with the whole "you're not Google/FB" / "over arbitrary RPS" logic though. If the design makes sense then it makes sense, end of story. Just understand it.. lol