SEEKING WORK | California, USA | Remote
Fractional CTO specializing in cloud infrastructure, full-stack development, and payment systems. 15+ years transforming complex technical challenges into scalable business solutions.
Core Expertise:
Payment Systems: Stripe Connect marketplaces, PCI compliance
Cloud & Infrastructure: AWS, GCP, Azure deployment and optimization, Kubernetes, infrastructure-as-code
Healthcare Tech: HIPAA-compliant architectures
API Architecture: 3rd-party integrations (Stripe, Twilio, SendGrid, Calendly, DocuSeal)
Full-Stack: Modern web applications, system architecture, 99.9%+ uptime SLAs
Built healthcare SaaS platform with Stripe Connect marketplace infrastructure
Reduced infrastructure costs by 40-60% through cloud optimization
Deployed production Kubernetes clusters with automated CI/CD pipelines
Ideal Projects: Healthcare/HealthTech, Professional Services, B2B Marketplaces. Particularly interested in high-automation tech stacks, rapid scaling challenges, and technical transformations.
Availability: 10-15 hours/week
Rate: $250/hour
Email: adamel { at } {g mail dot com}
As a founder who believes heavily in implementing simulations to rigorously test complex systems, I find the product/website itself interesting. However, I noticed lots of screenshots and less substance about how it actually works. If your ICP is technical, the frontend and marketing shouldn't be overdone IMO.
I need substance and clear explanations of models, methodology, and concepts with some visual support. Screenshots of the product are great, but a quick reel or two showing different examples or scenarios may be better.
I'm also skeptical many people who are already technical and already using AI tools will now want to use YOUR tool to conduct simulation-based testing instead of creating their own. The deeper and more complex the simulation, the less likely your tool can adapt to specific business models and their core logic.
This is part of the irony of AI and YC startups: LOTS of people are creating these interesting pieces of software with AI, when part of the huge moat that AI provides is being able to more quickly create your own software. As it evolves, the SaaS model may face serious trouble except for the most valuable (e.g. complex and/or highly scalable) solutions that are already available at good value.
However simulations ARE important and they can take a ton of time to develop or get right, so I would agree this could be an interesting market if people give it a chance and it's well designed to support different stacks and business logic scenarios.
OP here - I appreciate the feedback and you taking the time to look at the product/website beyond my personal blog post and learnings!
> If your ICP is technical, the frontend and marketing shouldn't be overdone IMO.
Great point. The ICP is technical, so this is certainly valid.
> I need substance and clear explanations of models, methodology, and concepts with some visual support. Screenshots of the product are great, but a quick reel or two showing different examples or scenarios may be better.
We're working hard to get to something folks can try out more easily (hopefully one day Show HN-worthy) and better documentation to go with it. We don't have it yet unfortunately, which is why the site is what it is (for now).
> I'm also skeptical many people who are already technical and already using AI tools will now want to use YOUR tool to conduct simulation-based testing instead of creating their own.
Ironically, we'd first assumed simulations would be easy to generate with AI (that's part of why we attempted to do this!), but 18+ months of R&D later, it's turned out to be something very challenging to do, never mind replicate.
I do think AI will continue to make building SaaS easier but I think there are certain complex products, simulations included (although we'll see), that are just too difficult to build yourself in most cases.
To some extent, as I think about this, I suppose build vs. buy has always been the question for SaaS, and it's a matter of cost versus effort (and what else you could do with that effort). E.g., do you architect your own database solution or just use Supabase?
> However simulations ARE important and they can take a ton of time to develop or get right, so I would agree this could be an interesting market if people give it a chance and it's well designed to support different stacks and business logic scenarios.
I appreciate this, and it's certainly been our experience! We're still working to get it right, but it's something I'm quite excited about.
This is so absolutely fundamental to US strategic advantage.
A huge reason we have so many unicorns is because doing business and scaling in the US is easier than EU or other places.
A huge part of why the Manhattan Project was successful was the substantial brain drain from Europe. I think Scott Galloway wrote about this or may have popularized it.
If you're only talking about the exceptional, sure. But when Microsoft fires x people and applies for ~x H-1Bs the same day... that doesn't seem like what you're talking about at all.
If an employee is exceptional and a skilled unicorn wrangler... $100K is nothing.
Not sure if it applies to H-1B, but if a company does mass layoffs, PERM applications (required for a green card, which you need to keep the employee past the visa validity period + extensions; up to 7 years IIRC) will be automatically rejected for some time. So it screws over your existing H-1B holders, making your company way less attractive.
Source: I came to the US on H-1B in 2012. I may be misremembering which stage of the process the mass layoffs affect.
Part of the problem is you don't know ahead of time (certainly not with 100% certainty) who's going to be an exceptional unicorn wrangler, and who's just going to be a pretty good engineer, unless they already have an incredible track record elsewhere. This will filter out a lot of possible future unicorn wranglers.
I've read "brain drain" in this thread multiple times. I might agree this happened back then, but I don't know what people mean by it right now. Where is the term suddenly coming from, and why is it used so uncritically?
In this thread it's thrown around as if everyone is referring to something specific related to immigration.
Edit: I checked US news. I can see what you all refer to now. To explain: the media seems to assume the US is having a "brain drain" because of fleeing scientists, and some other countries make fun of it and call it their "brain gain".
In New Zealand the brain drain discussion has been going on for decades. We are remote, have a limited economy, wages are low. As a result, many smart kids graduate from university, go travel overseas (particularly Australia and the UK), find jobs with better wages, and never come home. It's referred to in the media as the brain drain.
Use a password manager and use a SEPARATE second factor authenticator not tied to the password manager. I personally use Authy (though I think it's been deprecated) and Bitwarden.
I recently got a Google scam call from someone using Google Voice in the Bay Area (a 650 number), claiming to be with Google and saying that an unauthorized device was trying to access my account. I eventually realized they were just trying to get me to unlock my account, probably to drain bank accounts.
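To make the separate-authenticator point concrete, here's a minimal sketch of how a standalone TOTP authenticator works, assuming the pyotp library; the secret shown is purely illustrative:

    import pyotp

    # The TOTP secret is provisioned once (the QR code you scan) and should live
    # only in the authenticator app, never alongside your passwords.
    secret = pyotp.random_base32()  # illustrative; real secrets come from the service

    totp = pyotp.TOTP(secret)
    code = totp.now()      # 6-digit code that rotates every 30 seconds
    print(code)

    # The service verifies the code against the same shared secret.
    assert totp.verify(code)

The point being: someone who gets into the password vault alone still doesn't have the rotating codes, because the TOTP secret lives somewhere else.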
Same. I don't store my 2FA with my passwords. I also use Authy; I'd like to move to something else, but it works for now. I was annoyed they got rid of the Mac app.
Same, the desktop app worked great. Probably for the best though, ideally you want to pull your codes from a phone and password from your desktop device.
First time I was able to install Codex with no NPM errors. Gonna give it a shot, seems a lot slower than Claude Code but I'm only using basic Pro with ChatGPT vs. Max 100.
Author makes some good points though; I think many of us are feeling a bit of AI fatigue. There are so many platforms and services available now, and a large group will likely be vaporware soon as the market battle plays out.
I'll admit these "far right" labels don't hold much weight, usually just a way to expose yourself (the author here). But I agree with much of the overall sentiment of the article. The AI hype feels a bit dystopian and I say that as someone who has been heavily using LLMs since 2023.
They're very useful but we also have to ask ourselves what the world will look like if we automate everyone out of a job.
Claude/Anthropic is more focused on productivity (Coding, Spreadsheets, Reports). ChatGPT seems more focused on general-purpose LLM (Research, Cooking, Writing, Image Generation).
Makes sense that MS would partner with Anthropic since their tool-use for productivity (Claude Code) seems superior. I personally rarely code with ChatGPT, almost strictly Claude.
Some people might be surprised that MS would pick the product with the best technological fit rather than the one they already have a deep business and financial relationship with.
Surely Microsoft's expertise these days is in cross-selling passable but not best-in-class products to enterprises who already pay for Microsoft products.
It says something about how they view the AI coding market, or perhaps the size of the gap between Anthropic and OpenAI here, that they've gone the other way.
Why is Azure popular? Not on its own merits, it's because there is a pre-existing relationship with Microsoft.
Why is Teams the most widely used chat tool? Certainly not because it's good... it is, again, pre-existing business relationships.
Seems odd for a company that survives (perhaps even thrives) on these kinds of intertwined business reasons to, themselves, understand that they should go for merit instead.
Yep. Similarly, Microsoft Entra... if you want Office, you're getting it anyway. Might as well use it for SSO, right? And here's your free Teams license... how can you justify paying for Slack when we've a perfectly good chat client at home?
Except nobody chooses M365 Copilot over ChatGPT or Claude, so clearly the usual reasons aren't working. In this case, improving the product via integration is a last resort.
> It says something about how they view the AI coding market
I think Microsoft views models as a commodity and they'd rather lean into their strengths as a tool maker, so this is Microsoft putting themselves into a position to make tools around/for any AI/LLM model, not just ones they have a partnership with.
Honestly I think this sort of agnosticism around AI will work out well for them.
I've been happy with Anthropic models. I've also been using the Google models more, with decent results. As a rule of thumb, the Copilot/OpenAI models don't seem to be as good; I can't explain exactly why.
Overall, I think Google has a better breadth of knowledge encoded, but Anthropic gets work done better.
The new gpt-codex-* models are giving Claude Code a serious run for its money IMO. If OpenAI can figure out the Codex CLI UI (better permissions, more back and forth before executing) then I think they will have the better agentic coder.
I like Perplexity's deep research model, which is based on DeepSeek I think. I use that for most kinds of writing, discussion, research, etc. where I need some kind of feedback. Claude seems to go crazy sometimes when you ask it to do the same task. For coding, though, Claude Code is obviously better than everything else under the sun.
I decided to give Perplexity another try a few days ago, and it still seems to hallucinate things. Given the exact same tasks/prompts, both Claude and ChatGPT got the facts correct.
Perplexity uses those same models even without "deep research" on, so I don't see how the result would be any different. I haven't had any problems with it. Claude should be good, but they rate limit their desktop app and site so much that it's almost unusable every time I've tried.
I'd argue that Anthropic still has a hard edge on creativity for things like emulating people's comments.
I've fed several models my past Reddit comments (along with the comments they were responding to) and asked them to duplicate the style. Claude has always been the only thing that comes even close to producing original responses that even I think would be exactly my response, wording and all.
GPT or Gemini will just borrow snippets from the example text and smoosh them together to make semi-coherent points. Scratch that. They're coherent, but they're just unmistakably not from me.
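For anyone curious, the workflow is roughly this; a minimal sketch assuming the Anthropic Python SDK, with the model id and the example strings as placeholders:

    import anthropic

    # Placeholder examples: pairs of (comment being replied to, my actual reply).
    examples = [
        ("Parent comment text...", "My actual reply..."),
        ("Another parent comment...", "Another reply of mine..."),
    ]
    new_parent = "The comment I want a reply to..."

    prompt = "Here are comments I've written, each with the comment it was replying to:\n\n"
    for parent, reply in examples:
        prompt += f"Replying to: {parent}\nMy reply: {reply}\n\n"
    prompt += f"Write a reply to the following comment in my exact style:\n{new_parent}"

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # substitute whatever Sonnet model is current
        max_tokens=500,
        messages=[{"role": "user", "content": prompt}],
    )
    print(message.content[0].text)

The same prompt can be pointed at GPT or Gemini through their own SDKs for a side-by-side comparison.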
GPT-5 is pretty decent nowadays, but Claude 4 Sonnet is superior in most cases. GPT beats it on cost and usable context window when something quite complex needs to be planned top-down.
What I find interesting is how much opinions vary on this. Open a different thread and people will seem to have consensus on GPT or Gemini being superior.
Well, last I checked Claude's webchat UI doesn't have LaTeX rendering for output messages which is extremely annoying.
On the other hand, I wish ChatGPT had GitHub integration in Projects, not just in Codex.
I've also had Claude Sonnet 4.0 Thinking spew forth incorrect answers many times for complex problems involving some math (sometimes with an inability to write a formal proof), whereas ChatGPT 5 Thinking gives me correct answers with a formal proof.
I think it depends on the domain. For example, GPT-5 is better for frontend and React code, but struggles with niche things like Nix. Claude's UI designs are not as pretty as GPT-5's.
This is also pretty subjective. I’m a power user of both and tend to prefer Claude’s UI about 70-80% of the time.
I often would use Claude to do a “make it pretty” pass after implementation with GPT-5. I find Claude’s spatial and visual understanding when dealing with frontend to be better.
I am sure others will have the exact opposite experience.
My experience is exactly the opposite: Claude excelling in UI and React, while GPT-5 is better on really niche stuff. It might just be that I'm better at catching when GPT-5 hallucinates as opposed to the Claude 4 hallucinations.
But after OpenAI started gatekeeping all their new decent models in the API, I will happily refuse to buy more credits and rather use FOSS models from other providers (I wish Claude had proper no-log policies).
I never implied it's useless. I don't have scientific data to back this up either, this is just my personal "feeling" from a couple hundred hours I've spent working with these models this year: GPT-5 seems a bit better at top-down architectural work, while Sonnet is better at the detail coding level. In terms of usable context window, again from personal experience so far, to me GPT-5 has somewhat of an edge.
Agreed. My experience is GPT5 is significantly better at large-scale planning & architecture (at least for the kind of stuff I care about which is strongly typed functional systems), and then Sonnet is much better at executing the plan. GPT5 is also better at code reviews and finding subtle mistakes if you prompt it well enough, but not totally reliable. Claude Code fills its context window and re-compacts often enough that I have to plan around it, so I'm surprised it's larger than GPT's.
That's always the issue with these directory indexer services that try to scrape lots of data: the quality is always questionable. If users actually adopt it, you can have them report or fix the data based on consensus, but users won't even bother if they find 1/3 of your listings are complete junk and your platform actually wastes their time.
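As a rough illustration of what consensus-based fixing could look like (purely a sketch; the thresholds and example values are made up):

    from collections import Counter

    # Hypothetical user submissions for one listing's address field
    # ("junk" flags the whole entry).
    reports = ["123 Main St", "123 Main St", "123 Main Street", "junk", "123 Main St"]

    def consensus(reports, min_reports=3, min_agreement=0.6):
        """Accept a correction only when enough users agree on the same value."""
        if len(reports) < min_reports:
            return None  # not enough signal yet
        value, count = Counter(reports).most_common(1)[0]
        return value if count / len(reports) >= min_agreement else None

    print(consensus(reports))  # -> "123 Main St" (3 of 5 reports agree)

The hard part is earlier in the funnel: you only get enough reports per listing if the data is already good enough that people keep using the thing.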
SEEKING WORK | California, USA | Remote
Freelance engineer specializing in cloud infrastructure, full-stack development, and payment systems. 15+ years transforming complex technical challenges into scalable business solutions.
Core Expertise:
Payment Systems: Stripe Connect marketplaces, multi-party payment flows, PCI compliance
Cloud & Infrastructure: AWS, GCP, Azure deployment and optimization, Kubernetes, infrastructure-as-code
Healthcare Tech: HIPAA-compliant architectures, EMR integrations, telehealth platforms
API Architecture: 3rd-party integrations (Stripe, Twilio, SendGrid, Calendly, DocuSeal)
Full-Stack: Modern web applications, system architecture, 99.9%+ uptime SLAs
Built healthcare SaaS platform with Stripe Connect marketplace infrastructure
Reduced infrastructure costs by 40-60% through cloud optimization
Deployed production Kubernetes clusters with automated CI/CD pipelines
Integrated 10+ EMR systems into unified data model
Ideal Projects: Healthcare/HealthTech, Professional Services, B2B Marketplaces. Particularly interested in high-automation tech stacks, rapid scaling challenges, and technical transformations.
Availability: 10-15 hours/week
Rate: $250/hour
Email: adamel { at } {g mail dot com}