You're correct: there are multiple flavors of Google Cloud locations. The "Google concrete" ones are listed at google.com/datacenters, and London isn't on that list today.

cloud.google.com/about/locations lists all the locations where GCE offers service, which is a superset of the large facilities that someone would call a "Google Datacenter". I liked to refer to the distinction as Google concrete (we built the building) or not. Ultimately, even in locations that are shared colo spaces or rented, it's still Google putting custom racks there, integrating them into the network and services, etc. So from a customer perspective, you should pick the right location for you. If that happens to be in a facility where Google poured the concrete, great! If not, it's not the end of the world.

P.S., I swear the certification PDFs used to include this information (e.g., https://cloud.google.com/security/compliance/iso-27018?hl=en) but now these are all behind "Contact Sales" and some new Certification Manager page in the console.

Edit: Yes! https://cloud.google.com/docs/geography-and-regions still says:

> These data centers might be owned by Google and listed on the Google Cloud locations page, or they might be leased from third-party data center providers. For the full list of data center locations for Google Cloud, see our ISO/IEC 27001 certificate. Regardless of whether the data center is owned or leased, Google Cloud selects data centers and designs its infrastructure to provide a uniform level of performance, security, and reliability.

So someone can probably use web.archive.org to get the ISO 27001 certificate PDF from the last time it was still up.


It was this article! "Self-hosting forms, the sane way": https://karelvo.com/blog/selfhosting-forms-the-sane-way (HN discussion: https://news.ycombinator.com/item?id=40179398)

n8n for open-source workflows: https://github.com/n8n-io/n8n

NocoDB as an open-source Airtable/spreadsheet alternative: https://github.com/nocodb/nocodb

Again, for a single use this feels like overkill. But it's a set of tools that, once deployed, will allow extremely rapid personal data pipelines to be built!


Oh sweet Lord, one of my proudest moments at a college I used to work for was the getting-to-know-you thing. The last question was always a "fun" question. Mine was: why don't they make planes out of the black box material?

I wrote seven pages with diagrams, charts, and an explanation of the weights and air resistance of the various metal alloys that most planes are made of. There were footnotes and an additional two pages of citations.

And on the tenth page was just one line, "I made all of that up. I hope you enjoyed reading this."

I got so much hate mail from the physics department. It was amazing.


After trying various solutions - including DeskPad - I came up with a custom cross-platform (I'm on macOS, but assume it'll work elsewhere) solution that worked incredibly well on my 40" ultrawide monitor: OBS[1].

Having never used OBS before but knowing it was popular among streamers, I wondered if I could use it to (1) only share the specific applications I wanted to share and (2) share them at a resolution that people could actually read, without constantly being asked to zoom in.

I first tried setting up a virtual camera and sharing via my video stream, but it was laggy and the quality was so poor that people couldn't read what I was sharing. I quickly gave up on that approach.

Then I discovered Projectors[2]. Right-clicking on the main view in OBS and selecting "Windowed Projector (Preview)" launches a separate window, which I can then share directly via Zoom, Teams, Meet, etc.

Whatever I drag into the OBS view is displayed in the Windowed Projector (similar to DeskPad), with the added bonus that I can choose to blur certain applications that might be dragged in. For example, if I open Slack or my password manager, the entire window blurs until I focus back on my terminal or browser.

It took a bunch of tweaking to perfect, but I'm very pleased with how well it works now.

---

[1] https://obsproject.com/

[2] https://obsproject.com/kb/power-of-projectors


Trunk | https://trunk.io | Sr Data Engineer / DevRel Engineer | Full-Time | Hybrid SF or Remote US or Canada

Trunk is an a16z-funded dev tools startup redefining software development at scale. We aim to solve problems that developers hate by bringing the tools usually built in-house at the best engineering orgs to every development team. We've built four products so far and have plans for more:

  * Code Quality: a universal linter/formatter, available as a CLI, VSCode extension, and CI check;

  * Merge Queue: a merge queue, to ensure that PRs are tested in order before they're merged;

  * CI Analytics: detects, quarantines, and eliminates flaky tests from your code base. Prevents flaky tests from producing noise and blocking CI.

  * Flaky Tests: detect and eliminate flaky tests.

In 2022, we raised a $25M Series A led by Initialized Capital (Garry Tan) and a16z (Peter Levine).

Our tech stack:

  * Frontend: Typescript, React, Redux, Next.js
  * Backend: Typescript, Node, AWS, CDK, k8s, gRPC
  * Observability: Prometheus, Grafana, Kiali, Jaeger
  * CLI: C++20, Bazel
  * VSCode Extension: Typescript
  * CI/CD: GitHub Actions
  * General: GitHub, Slack, Linear, Slite

Unlimited PTO (and we all actually take PTO), competitive salary, and equity packages! Please apply here: https://trunk.io/jobs


I know Cirrus CI uses (and I think developed?) Tart[1] as a scriptable VMM for macOS CI.

[1]: https://tart.run/


> The Valid method takes a context (which is optional but has been useful for me in the past) and returns a map. If there is a problem with a field, its name is used as the key, and a human-readable explanation of the issue is set as the value.

I used to do this, but ever since reading Lexi Lambda's "Parse, Don't Validate," [0] I've found validators to be much more error-prone than leveraging Go's built-in type checker.

For example, imagine you wanted to defend against the user picking an illegal username. Like you want to make sure the user can't ever specify a username with angle brackets in it.

With the Validator approach, you have to remember to call the validator on 100% of code paths where the username value comes from an untrusted source.

Instead of using a validator, you can do this:

    type Username struct {
      value string
    }

    func NewUsername(username string) (Username, error) {
      // Validate the username adheres to our schema: for example,
      // reject angle brackets (uses "errors" and "strings" from the stdlib).
      if strings.ContainsAny(username, "<>") {
        return Username{}, errors.New("username contains invalid characters")
      }

      return Username{value: username}, nil
    }
That guarantees that you can never forget to validate the username through any codepath. If you have a Username object, you know that it was validated because there was no other way to create the object.

[0] https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-va...


> doesn't clean itself after installation and easily fills 50-100 gigabytes with useless "simulators" that no one seems to understand why is needed for most devs.

Not the case anymore. Go to Settings -> General -> Storage in macOS, where you can manage the storage space. If you tap the "i" next to "Developer", you'll be given the option to delete caches, indexes, and iOS device support files (which can be huge).


One of the easiest approaches is torch.compile; it's the latest iteration of the PyTorch compiler (previous methods were TorchScript and FX Tracing).

You simply write model = torch.compile(model)

"Across these 163 open-source models torch.compile works 93% of time, and the model runs 43% faster in training on an NVIDIA A100 GPU. At Float32 precision, it runs 21% faster on average and at AMP Precision it runs 51% faster on average."[1]

What Google is trying to do is involve more people in the R&D of these kinds of methods.

[1] https://pytorch.org/get-started/pytorch-2.0/


2c: if you need PostgreSQL elsewhere in your app anyway, then store your event data in PostgreSQL + FOSS reporting tools (Apache Superset, Metabase, etc.) until you hit ~2TB. After that, decide whether you need all 2TB online or just daily/hourly summaries - if the latter, stick with PostgreSQL forever[1]. I have one client with 10TB+ and 1500 events per sec @ 600 bytes/rec (80GB/day before indexing): 2 days of detail online, the rest summarized, with details moved to S3 where they can still be queried via Athena SQL[2]. They're paying <$2K for everything, including a reporting portal for their clients. AWS RDS multi-AZ with auto-failover (db.m7g.2xlarge) serves both inserts and reporting queries at <2% load. One engineer spends <5 hours per MONTH maintaining everything, in part because the business team builds their own charts/graphs.
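
As a rough sketch of the summary-table pattern described above (Python + psycopg2; the table and column names are invented for illustration):

    import psycopg2

    conn = psycopg2.connect("dbname=events")

    # Roll the previous hour of raw event detail up into a summary table.
    # Dashboards query the small summary; old detail rows can age out to S3.
    ROLLUP_SQL = """
        INSERT INTO events_hourly (bucket, event_type, event_count)
        SELECT date_trunc('hour', created_at), event_type, count(*)
        FROM events_raw
        WHERE created_at >= date_trunc('hour', now()) - interval '1 hour'
          AND created_at <  date_trunc('hour', now())
        GROUP BY 1, 2;
    """

    with conn, conn.cursor() as cur:  # the connection commits on success
        cur.execute(ROLLUP_SQL)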

Sure, with proprietary tools you get a dozen charts "out of the box", but with pgsql your data is in one place, there's one system to learn, one system to keep online/replicate/backup/restore, one system to secure, one system to scale, one vendor (vendor-equivalent) to manage, and millions of engineers who know the system. Building a dozen charts takes an hour in systems like Preset or Metabase, and non-technical people can do it.

Note: I'm biased, but over 2 decades I've seen databases and reporting systems come & go, and good ol' PostgreSQL just gets better every year.

https://instances.vantage.sh/aws/rds/db.m7g.2xlarge?region=u...

[1] If you really need it, there are PostgreSQL-compatible systems for additional scaling: Aurora for another 3-5x, TimescaleDB for 10x, CitusDB for 10x+. With each, there are tradeoffs for being slightly non-standard, so I don't recommend using them until you really need to.

[2] customer reporting dashboards require sub-second response, which is provided by PostgreSQL queries to indexed summary tables; Athena delivers in 1-2 sec via parallel scans.


Spring Boot and Spring Cloud for the backend & GraphQL for the win. ;-)

I find that moving the full query system into the front end is where most front-end devs really want to be. They want a full-power query system for the data instead of continuous rounds of re-inventing the transport layer: REST, GraphQL, *RPC, etc.

It's hard to adopt such a system in most traditional web shops with their specialized backend and frontend teams. You're pulling out the database, backend, transport, and auth layers and replacing them with this single block system. Most system architects grew up in the backend, so they are generally pretty ignorant of this issue. As it touches both sides extensively, you're probably not fitting this into an existing system, which leaves only green-field new development. Finally, your backend is not an AWS or Azure service, nor is it Lambda-friendly. All of this means that most architect types I talk to will never touch it.

This style of system mostly already exists with old tech, CouchDB+PouchDB. Which works pretty well for some things. The downsides are that the query system isn't really ideal and the auth and data scoping system is pretty foreign to most people. The easiest model to work with is when the data is totally owned by a single user, and then you use the out-of-the-box database-per-user model. High data segmentation with CRDTs removes a lot of conflict issues.

It has scaling issues though; CouchDB has really high CPU requirements when you're connecting 10k to 100k users. The tech is long in the tooth, though it is maintained. On the system design side, it gets really complicated when you start sharing data between users, which makes it rather unsuitable, as you're just moving the complexity rather than solving it.

This approach seems to hit the same target, though it will likely have similar scaling issues.

Looking forward to seeing the evolution of the system. Looks like a first step into the world.


I write safety-critical code, so I've worked with many of these: Frama-C, TIS, Polyspace, KLEE, CBMC, and some that aren't on the list like RV-match, Axivion, and SonarQube.

With virtually any tool in this category, you need to spend a good amount of time ensuring it's integrated properly at the center of your workflows. They all have serious caveats and usually false positives that need to be addressed before you develop warning blindness.

You also need to be extremely careful about what guarantees they're actually giving you and read any associated formal methods documentation very carefully. Even a tool that formally proves the absence of certain classes of errors only does so under certain preconditions, which may not encompass everything you're supporting. I once found a theoretical issue in a project that had a nice formal proof because the semantics of some of the tooling implicitly assumed C99, so certain conforming (but stupid) C89 implementations could violate the guarantees.

But some notes:

Frama-C is a nice design, but installing it is a pain, ACSL is badly documented/hard to learn, and you really want the proprietary stuff for anything nontrivial.

TrustInSoft (TIS) has nice enterprise tooling, but their "open source" stuff isn't worth bothering with. Also, they make it impossible to introduce to a company without going through the enterprise sales funnel, so I've never successfully gotten it introduced because they suck at enterprise sales compared to e.g. Mathworks.

RV-match (https://runtimeverification.com/match) is what I've adopted for my open source code that needs tooling beyond sanitizers because it's nice to work with and has reasonable detection rates without too many false positives.

Polyspace, SonarQube, Axivion, compiler warnings, etc. are all pretty subpar tools in my experience. I've never seen them work well in this space. They're usually either not detecting anything useful or falsely erroring on obviously harmless constructs, often both.


I suggest you work with a copywriter and come up with a better way of presenting windmill.dev. How you're presenting it makes sense to you, because you built it. You know what you mean when you say words like "boilerplate" and "business logic". That has a specific meaning in your head. But it's different for every person in the software industry. To a customer, it's confusing.

For investors:

> We are a developer platform for enterprise, offering a performance-focused solution for building internal software using code. Our open-source platform allows businesses to focus on their unique business logic while we handle the boilerplate tasks, resulting in high-velocity development and the ability to scale at enterprise levels.

For devs:

> We are a developer platform for enterprise, providing a system for running code written in Python, Typescript, Go, Bash, and query languages at scale. Our platform focuses on performance and offers extensive capabilities to build internal software, including APIs, workflows, background jobs, and UIs. With a fully open-source and flexible architecture, developers can concentrate on their business logic while leveraging our robust ecosystem for efficient development and hosting options.

Generic:

> Windmill is a developer platform designed for enterprises to build internal software efficiently. It combines the speed of low-code solutions with the flexibility of coding, allowing developers to focus on unique business logic rather than boilerplate tasks like permissioning, queuing, and front-end development. Our platform is fully open-source, offering high performance and hosting versatility.

> Our active community of developer users provides constant feedback, driving our platform's growth. By emphasizing commonly used languages such as Python, TypeScript, Go, and Bash, Windmill serves as a generalist tool without compromising quality, enabling complex functionalities within a production-ready, enterprise-scale environment.


The only nutrient you can't get from a vegan diet is B12, because farming is too clean (B12 is in dirt, and eating dirt is how you'd get the B12 you need). Everything else you can get from eating plant-based products.

You do not need heavily processed oils and grains, nor do you need a variety of supplements. Beans and rice, for example, together contain all of the essential amino acids a human body needs.

B12 deficiency is an issue even for people who eat meat, due to cleaner practices and feeding methods where animals no longer graze but instead get their food delivered in a way that doesn't let them ingest B12 from the dirt.

You can absolutely survive eating food that is not cooked with oil.


Going to swing at this in the 3 ways I understand Armstrong to have impacted music:

Cornet/Trumpet playing:

Here's [Dipper Mouth Blues](https://www.youtube.com/watch?v=PwpriGltf9g&pp=ygUaIkRpcHBlc...) from 1923 by King Oliver. Note how every part is really "interlocked" together.

Here's [West End Blues](https://www.youtube.com/watch?v=4WPCBieSESI&pp=ygUVIldlc3QgR...) from 1928 by Armstrong. Each instrumentalist is showing a _lot_ more technicality in their solos, and the solos are much longer and more isolated. That's one of the big keys -- instrumentalists soloing on their own while the band backs them.

Jazz singing:

Here's Al Jolson in 1922 doing [Toot, Toot, Tootsie!](https://www.youtube.com/watch?v=rlv4b9UCk0c&pp=ygUOQWwgSm9sc...). It's very vaudeville, very Broadway.

Then here's Armstrong in 1926 doing [Heebie Jeebies](https://www.youtube.com/watch?v=qEBMXJwQhNU&pp=ygUTSGVlYmllI...). It's much more personal, charismatic, and swinging. He's using some scatting!

Improv style:

1917 [Livery Stable Blues](https://www.youtube.com/watch?v=5WojNaU4-kI&pp=ygUYTGl2ZXJ5I...)

1927 [Potato Head Blues](https://www.youtube.com/watch?v=AeBn_TZ4Iak&pp=ygUmcG90YXRvI...)

---

Now let's go forward 20 years to Dizzy Gillespie [Salt Peanuts](https://www.youtube.com/watch?v=gg1Wl-NmzWg&pp=ygUhZGl6enkgZ...). I think it's clear how Louie inspired _so much_ of what Gillespie and the orchestra are doing here. That would go on to morph so many ways over the next 60 years.

Hope that helps. I'm no expert -- just a guy who went to school for music, played trumpet, and listened to a lot of Armstrong.


I don't want some ridiculous 3 ton electric truck that needs a 131 kWh battery.

I want an adorable, tiny EV Mitsubishi Delica for $13,000 new. https://www.mitsubishi-motors.com/en/newsrelease/2023/detail...

Or this adorable, tiny EV SUV for about the same price: https://www.thedrive.com/news/gms-tiny-electric-pickup-is-an...


Caution: this tool uses AES in OFB mode[0] to encrypt/decrypt the file, without any guarantee of ciphertext integrity (no MAC).

[0]: https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation...
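
To illustrate why the missing MAC matters, here's a minimal sketch (using Python's cryptography package, not this tool's code) showing that OFB ciphertext is malleable: flipping bits in the ciphertext flips the same bits in the decrypted plaintext, and nothing detects the tampering.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key, iv = os.urandom(32), os.urandom(16)
    plaintext = b"pay alice 0100 dollars"

    # Encrypt with AES-OFB: a stream-cipher mode with no authentication.
    enc = Cipher(algorithms.AES(key), modes.OFB(iv)).encryptor()
    ct = enc.update(plaintext) + enc.finalize()

    # An attacker flips bits in the ciphertext without knowing the key;
    # XORing byte 10 turns the '0' into a '9' in the recovered plaintext.
    tampered = bytearray(ct)
    tampered[10] ^= ord('0') ^ ord('9')

    # Decryption "succeeds" silently and yields the attacker-chosen text.
    dec = Cipher(algorithms.AES(key), modes.OFB(iv)).decryptor()
    print(dec.update(bytes(tampered)) + dec.finalize())  # b'pay alice 9100 dollars'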


Totally unrelated, but since we are talking about QOL tools on macOS, I thoroughly recommend BetterDisplay[0].

It enables Retina scaling functionality on any external monitor, regardless of the resolution or Apple compatibility.

It's great for 2K monitors that are perfectly HiDPI-capable but aren't deemed good enough by Apple, and even for FHD secondary displays that don't need that much screen real estate, so you can use that real estate to scale everything nicely.

[0]: https://github.com/waydabber/BetterDisplay


Older programmers like you are not old enough. (-:

ASCII had variable numbers of bytes per glyph back in the 1960s, and the Teletype Model 37 semantics of combining-backspace and combining-underscore still exist in modern operating systems, like the various BSDs and Linux-based ones, to this day. It's a combining-characters multibyte encoding understood by programs from less(1) through ul(1) to nroff(1).

ASCII had multiple transfer encodings, from 8N1 to 7E2, many of which were neither 7-bit nor 8-bit; and 1 byte of RAM was not 1 ASCII character. 1 byte of RAM not being 1 ASCII character is, after all, what brought about "extended ASCII".

Along with that came ECMA-35, with its GR and GL, and code pages. All of those broke theretofore held truths such as that one could just strip the parity bit, actually introduced the false idea (that many people had to keep correcting) that ASCII was 8-bit, and required a full-blown state machine for parsing ECMA-35 escape sequences.

Then there are the tales that old programmers can tell about how they couldn't actually deal in English because ASCII didn't have a pound symbol, and what was "English" was actually American (which was, after all, the "A" in "ASCII").

There is a whole generation of older programmers who laugh hollowly at the idea of "the simplicity of ASCII"; when the amount of time that they spent fiddling with serial protocol settings, transcoding between code pages, handling language variant characters in Teletext, wondering why an 8-bit-clean transport protocol such as TCP didn't mean that SMTP could be 8-bit clean, and doing other such stuff, probably came to years in total spent on all of this "simplicity".


I'm doing an experiment with AI posting on Reddit accounts to see if they would get banned. I bought 100 few-week-old accounts from some sketchy site for $0.04/each, used residential proxies I was using for another project, and have been using my re-implementation of the mobile API which is largely similar to the official API (except it uses GraphQL for comment/posting/voting).

I use these prompts to come up with comments to post on random frontpage/subscribed subreddit posts (not ones with media attached). I also randomly upvote posts and search trending terms. Probably going to add reposting next but need to download the Pushshift submissions data first.

    SystemPrompt: `You are a Reddit user responding to a post.  Write a single witty but informative comment.  Respond ONLY with the comment text.
    Follow these rules:
    - You must ALWAYS be extremely concise! 99% of the time, your lines should be a sentence or two.
    - Summarize your response to be as brief as possible.
    - Avoid using emojis unless it is necessary.
    - NEVER generate URLs or links.
    - Don't refer to yourself as AI. Write your response as if you're a real person.
    - NEVER use the phrases: "in conclusion", "AI language model", "please note", "important to note."
    - Be friendly and engaging in your response.`,
    UserPrompt: `Subreddit: "%s"
    Title: "%s"
    `,
Here's the longest running one: https://old.reddit.com/user/Objective_Land_2849

Current problem is that the responses typically range from cynical to way too enthusiastic.


I share many of your concerns and frustrations, although I suspect what you're asking for is something they consider a moat along the lines of a trade secret, rivaled only by the collection of performance-improvement techniques they've amassed across 1000s-10,000s of training runs, 100s of engineers, and (hundreds of?) millions spent on compute. People are hired and praised in the community for their skill in cleaning data.

A non-answer for you, but for curious others: [State of GPT], from 10 days ago, provides a thorough introduction to the training process. Karpathy, speaking at a Microsoft event, gives a deep summary of the concepts, training phases, and techniques proving useful in the world of Generative Pretrained Transformers.

[State of GPT]: https://www.youtube.com/watch?v=bZQun8Y4L2A


This is a fine list, but it only covers a specific type of generative AI. Any set of resources about AI in general has to at least include the truly canonical Norvig & Russell textbook [1].

Probably also canonical are Goodfellow's Deep Learning [2], Koller & Friedman's PGMs [3], the Krizhevsky ImageNet paper [4], the original GAN [5], and arguably also the AlphaGo paper [6] and the Atari DQN paper [7].

[1] https://aima.cs.berkeley.edu/

[2] https://www.deeplearningbook.org/

[3] https://www.amazon.com/Probabilistic-Graphical-Models-Princi...

[4] https://proceedings.neurips.cc/paper_files/paper/2012/file/c...

[5] https://arxiv.org/abs/1406.2661

[6] https://www.nature.com/articles/nature16961

[7] https://www.nature.com/articles/nature14236


For personal use I can recommend this Obsidian plugin. It turns a markdown file into a Kanban board.

https://github.com/mgmeyers/obsidian-kanban


I think the "Hacker News readers" and "People who like dithering" Venn diagram has converged slowly over time. I'm a happy resident of the overlapping zone.

A classic visual explainer from the team behind Myst and Riven (who had to balance image quality and disk space) is archived here: http://web.archive.org/web/20230415173939/http://cho.cyan.co...

Obligatory link to Obra Dinn's fascinating dev log post, regarding the challenge of spatial and temporal coherence when dithering real-time 3D (for aesthetic reasons, i.e. deliberately noticeable to the player): https://forums.tigsource.com/index.php?topic=40832.msg136374...

Lately I've been attempting to add the original dithering from Excel 97's easter egg to my online reproduction. In the era of indexed-color graphics, developers had to dither efficiently to reduce banding. Compare these two rendering techniques of the same subject, one with 16 shades of gray and dithering, and the other with 256 shades of gray:

https://rezmason.github.io/excel_97_egg/?shadingOnly=true&l=...

https://rezmason.github.io/excel_97_egg/?shadingOnly=true&l=...
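
For anyone curious how this style of dithering works mechanically, here's a tiny illustrative sketch (not the Excel 97 or Obra Dinn code) of ordered dithering with a 4x4 Bayer matrix, quantizing 256 gray levels down to 16:

    import numpy as np

    # Classic 4x4 Bayer threshold matrix, normalized to [0, 1).
    BAYER_4X4 = np.array([
        [ 0,  8,  2, 10],
        [12,  4, 14,  6],
        [ 3, 11,  1,  9],
        [15,  7, 13,  5],
    ]) / 16.0

    def ordered_dither(gray, levels=16):
        """Quantize an 8-bit grayscale image (H, W array) to `levels` shades,
        using the tiled Bayer matrix as a per-pixel threshold offset."""
        h, w = gray.shape
        threshold = np.tile(BAYER_4X4, (h // 4 + 1, w // 4 + 1))[:h, :w]
        scaled = gray / 255.0 * (levels - 1)
        # Adding the threshold before truncating trades smooth banding for
        # a fine, evenly distributed dither pattern.
        return np.floor(scaled + threshold).clip(0, levels - 1).astype(np.uint8)

    # Example: a horizontal gradient, dithered down to 16 shades of gray.
    gradient = np.tile(np.arange(256, dtype=np.float64), (64, 1))
    dithered = ordered_dither(gradient)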


I too have written force features. You've likely never seen this, but in tool design, prefer git push's --force-with-lease design.

Force With Lease says "I believe X is true, and because of that, force". The tool can check that X is indeed true and, if so, force; but if it's not true, the human was wrong, and they ought to reconsider and obtain a new "lease" before the change is made.

In Git this "lease" is the current ref of a remote, if we specify this, but actually it's wrong, that means the state of the remote system has changed, and we need to re-consider whether our forced change is still appropriate. e.g. While you and Bill were quickly changing colors.js to hackily disable dark mode, turns out Sarah guessed the actual bug, and replaced main.css with a patched version that works fine even in dark mode, if you force-push your change, instead of zapping the broken change it zaps Sarah's fix!

This approach works best where you can actually take some sort of "lock" and avoid clashes at the tool level, but there's some benefit even without that at the human level.


I have a new house (built in 2020) and it did come with an inefficient, underpowered mechanical ventilation intake that just pulled outside air directly into the home. Given the house is in Austin, Texas where the weather outside rarely matches the conditions inside, this results in a lot of cooling or heating loss, as well as losing the humidity conditioning of the indoor air. It's no wonder that I regularly hear locals advising folks to simply turn off the mechanical ventilator; bringing in hot, humid summer air or cold, dry winter air is not good, even if the fresh air is.

That's why shortly after moving in, I had the garbage ventilator replaced with an ERV. Based on my research I selected a Renewaire EV-Premium L, a variable speed unit which does up to 280 CFM. That's a ton of fresh air, even for a 4300 sqft house. But importantly it preserves some of the temperature and humidity when exchanging the air, so it's a lot more efficient than a plain mechanical ventilator.

I have it wired into my Ecobee thermostat. It runs at its configurable low speed 24/7, and the Ecobee puts it in boost mode (full output) for 10-15 minutes per hour.

This setup did cost some money of course, but the $2000 cost was pretty minor in the context of a new home. With this setup, we never see a buildup of CO2, VOCs, etc indoors. Definitely worth it when we're both working from home and spending most of our time there. And although we have a gas cooktop, I never see any indoor air quality issues so long as we're running the hood while cooking.

I hope indoor air quality becomes a bigger priority for homes, offices, and schools. The building codes in this area haven't yet caught up to the need for improved ventilation to account for how much better sealed up modern buildings are.


I have not read the article yet, but this is what I predict the contents will be: "Cory Doctorow Wants You to Know He Has a New Book Coming Out."

I predict it will be an interview in which Doctorow's trenchant commentary is that the emerging cyberdystopia can be meaningfully thwarted by buying yet another novel by him, about how a new technology is only being wielded incorrectly, and how, if only the plucky hackers use it in transgressive ways, it will undermine the institutional foundations of the technological order rather than reinforce the status quo.


Off-topic, but something seems dangerously off with urlscan.io (a service I had never heard of before).

If I go to urlscan.io and look at the recently scanned sites (which are live-updated), every now and then I can find links with potentially sensitive information.

I found OneDrive and SharePoint links. I was unable to actually access the documents in them (it asked me to login), but I could see their content (or metadata) with UrlScan's "live screenshot" feature.

At one point, it scanned a "reset password" link with the authentication token in the query string (!). I was able to access that link and I would likely be able to reset the password for that specific user. I won't share the underlying website so others don't go ahead looking for it, but it was for a non-US government service.

The impression I have is that some email provider (or perhaps some antivirus software?) is automatically scanning user emails and the links are being shared publicly, alongside a "live screenshot".

I might be missing something, but this is weird.


I appreciated their other article, where OP explains that the Antarctic research station has two segments: an American one and an NZ one.

Their bank loses its mind when their bank card is physically used in America and then in NZ 30 minutes later, and OP has to clear up how that isn't impossible.

> Being in Antarctica is weird, because you can travel back and forth between the “United States” (McMurdo) and “New Zealand” (Scott Base) in less than 30 minutes.

> Credit card companies have a hard time with this.

https://brr.fyi/posts/credit-card-shenanigans

Edge cases mang!

