I don't work at OpenAI, but I use Codex, as I imagine most people there do too.
I actually use it from the web app, not the CLI. So far I've run over 100 Codex sessions, a large percentage of which I turned into pull requests.
I kick off Codex for one or more tasks and then review the code later. The tasks run in the background while I do other things. Occasionally I need to re-prompt if I don't like the results.
If I like the code, I create a PR and test it locally. I would say 90% of my PRs are AI-generated (with a human in the loop).
Since I started using Codex, I very rarely create hand-written PRs.
I imagine they mean a remote KVM. So you remote into a PC sitting in a basement in someone's house in the US. You then route all your outgoing internet traffic through that setup, and your IP address would look legit.
* Tooling improvements benefit everyone. Maybe that's a faster compiler, an improved linter, code search, code review tools, bug database integration, a presubmit check that formats your docs - it doesn't matter; everyone has access to it. Otherwise you get different teams maintaining different things. In 8 years at Microsoft my team went through at least four CI/CD pipelines (OK, not really CD), most of which were different from what most other teams in Windows were doing, to say nothing of Office - despite us all writing Win32 C++ stored in Source Depot (Perforce) and later Git.
* Much easier refactors. If everything is an API and you need to maintain five previous versions because teams X, Y, and Z are on versions 12, 17, and 21, it is utter hell (see the sketch after this list). With a unified monorepo you can just do the refactor on all callers.
* It builds a culture of sharing code and reuse. If you can search everyone's code and read everyone's code, you can not only borrow ideas but easily consume shared helpers. This is much more difficult in a polyrepo setup because of the aforementioned versioning hell.
* A single source of truth. Server X is running at CL #123, Server Y at CL #145, but you can quickly understand what that means because it's all one source control history and you don't have to compare commit numbers across repos - higher is newer, end of story.
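To make the refactoring point concrete, here's a minimal sketch of the "before" picture - the kind of compatibility shims a shared library accumulates when callers in other repos upgrade on their own schedule. All names and version numbers here are hypothetical:

```python
# Hypothetical shared library in a polyrepo world: old call shapes must stay
# alive because teams X, Y, and Z pin different library versions.
def fetch_user_v12(user_id):
    return {"id": user_id}

def fetch_user_v17(user_id, include_profile):
    user = {"id": user_id}
    if include_profile:
        user["profile"] = {}
    return user

def fetch_user(user_id, include_profile=False, timeout=5.0, api_version=21):
    # Compatibility shims for callers still pinned to older versions.
    if api_version == 12:
        return fetch_user_v12(user_id)
    if api_version == 17:
        return fetch_user_v17(user_id, include_profile)
    # Current behaviour for up-to-date callers.
    return {"id": user_id, "profile": {} if include_profile else None, "timeout": timeout}
```

In a monorepo the equivalent change is one commit: give `fetch_user` its new signature and fix every caller in the same change, so the shims never need to exist.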
> What are the advantages vs having a mono repo per team?
If you have two internal services you can change them simultaneously. This is really useful for debugging with git bisect, as you always have a tree that passes CI.
I might write a detailed blog about this at some point.
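As a rough illustration of the bisect point (paths and test commands below are hypothetical): because both services live in the same history, `git bisect run` can drive one script that tests the pair together, and every commit it lands on is a consistent snapshot of both.

```python
#!/usr/bin/env python3
"""Hypothetical driver script for `git bisect run` in a monorepo.

From the repo root:
    git bisect start <bad-commit> <good-commit>
    git bisect run python bisect_check.py

Exit code 0 marks a commit good, 1 marks it bad.
"""
import subprocess
import sys

# Both services are in one repo, so they can be tested as a unit at any commit.
SERVICES = ["services/auth", "services/billing"]  # hypothetical paths

def main() -> int:
    for path in SERVICES:
        # Hypothetical per-service test command; substitute whatever CI runs.
        result = subprocess.run(["python", "-m", "pytest", path], check=False)
        if result.returncode != 0:
            return 1  # bad commit: at least one service fails here
    return 0  # good commit: both services pass

if __name__ == "__main__":
    sys.exit(main())
```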
One of the big advantages is visibility. You can be aware of what other people are doing because you can see it. They'll naturally come talk to you (or vice versa) if they discover issues or want to use it. It also makes it much easier to detect breakages/incompatibilities between changes, since the state of the "code universe" is effectively atomic.
Not sure if I get it. If you are using a product like GitHub Enterprise, you are already quite aware of what other people are doing. You have a lot of visibility, source-code search, etc. If you have CI/CD that auto-creates issues, you can already detect breakages, incompatibilities, etc.
State of the "code universe" being atomic seems like a single point of failure.
Imagine team A vendors team B's code into their repo and starts adding their own little patches.
Team B has no idea this is happening, as they only review code in repo B.
Soon enough team A stops updating their dependency, and now you have two completely different libraries doing the "same" thing.
Alternatively, team A simply pins their dependency on team B's repo at hash 12345, then just never updates it... How is team B going to catch bugs that their HEAD introduces in team A's repo?
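To make that failure mode concrete, here is a minimal sketch of the out-of-band drift check team B would have to run just to notice team A's local patches - something a monorepo gives you for free because there is only one copy. All paths are hypothetical:

```python
"""Hypothetical drift check: compare team A's vendored copy of team B's library
against a pristine checkout of repo B at the pinned commit."""
import hashlib
from pathlib import Path

VENDORED = Path("team_a/third_party/team_b_lib")  # hypothetical vendored copy in repo A
UPSTREAM = Path("/tmp/team_b_at_12345")           # hypothetical checkout of repo B at the pinned hash

def digest(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def drifted_files(vendored: Path, upstream: Path) -> list[str]:
    drift = []
    for file in sorted(vendored.rglob("*.py")):
        rel = file.relative_to(vendored)
        counterpart = upstream / rel
        # Flag files that were patched locally or no longer exist upstream.
        if not counterpart.exists() or digest(file) != digest(counterpart):
            drift.append(str(rel))
    return drift

if __name__ == "__main__":
    for name in drifted_files(VENDORED, UPSTREAM):
        print(f"diverged from upstream: {name}")
```

And this only detects the divergence; nothing here tells team B which of their HEAD changes would break team A's patched copy.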
This is already caught by multi-repo tooling like GitHub today. If you vendor in an outdated version with security vulnerabilities, issues are automatically raised on your repo. Team B doesn't need to do anything. It is team A's responsibility to adapt to the latest changes.
Curious, because I haven't seen this myself. Do you mean that GitHub detects outdated submodule references? Or that GitHub detects a copy of code existing in another repo, where said code has had some patches upstream?
(In our org, we have our own custom Go tool that handles more sophisticated cases, like analyzing our divergent forks and upstream commits and raising PRs not just for dependencies but for features. It only works when upstream refactoring is moderate, though.)
I’ve been wanting to write this somewhere and this seems as good a place as any to start.
Is it just me or is MCP a really bad idea?
We seem to have spent the last 10 years trying to make computing more secure, and now people are using node & npx - tools with a less-than-flawless safety story - to install tools and make them available to a black-box LLM that they trust to be non-harmful. On what basis they trust that, even with respect to accidental harm, I am not sure.
Tesla in Berlin employs 12,000 workers and can produce 5,000 cars a week.
The US will need a lot of factories to employ tens of millions of workers. I also imagine new factories will employ fewer workers due to increased automation.
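Back-of-envelope on those numbers, with the headcount target as a loose assumption just to set the scale:

```python
# Rough arithmetic using the Berlin figures above.
workers_per_factory = 12_000          # Tesla Berlin headcount
target_factory_jobs = 20_000_000      # assumed midpoint of "tens of millions"

factories_needed = target_factory_jobs / workers_per_factory
print(f"~{factories_needed:,.0f} Berlin-sized factories")  # ≈ 1,667
```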
Taking something that is basically a lonely, depressing activity and putting a social aspect around it.
Well done.