Don’t do this… unless you have a very specific and non-standard use case that requires it, like the author:
> I work using git worktrees to have multiple branches checked out at the same time. By using this trick, I can run different commands and make sure the imported package is from that branch, instead of having to python3 -m pip install -e . around all the packages
For most use cases, you can wire up your imports and script entry points in poetry, and manage venvs there too. Or just use pyenv-virtualenv if you’re on macOS to auto-use your per-project venv.
Again, the author's requirements here are quite esoteric in my experience, and I’d really encourage folks to avoid this particular can of worms if at all possible.
I agree my setup is not simple (or even recommended as the default), that's why I call them "tricks".
> you can wire up your imports and script entry points in poetry, and manage venvs there too. Or just use pyenv-virtualenv if you’re on macOS to auto-use your per-project venv.
As someone who has tried most of the current Python tools (including poetry and pyenv), I wouldn't say that "wiring up" imports and entry points is much simpler or footgun-free. It also requires using specific tools, which may not be possible depending on the project.
Also, multiple venvs are not equivalent, since you still need to manage those venvs and install the packages (+ dependencies). The `PYTHONPATH` trick can save a lot of time when the dependency tree is complex and you just want a single venv.
For example, a workflow like:
git worktree add -b feature_one ../feature_one
#
# edit some files
#
BRANCH=feature_one make run-service
# in a different terminal, assuming `master` is also checked out as a worktree
BRANCH=master make run-service
With the `PYTHONPATH` trick, you can do this in a few seconds. Now you have the same service running two different versions of your package, without having to reinstall anything or worry about using the correct venv.
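For reference, a minimal sketch of how the Makefile side of this could look (the worktree layout, the src/ directory, and the my_service module are assumptions for illustration, not my exact setup):

# BRANCH selects which worktree's checkout ends up on PYTHONPATH
BRANCH ?= master
WORKTREES_DIR ?= ..
export PYTHONPATH := $(WORKTREES_DIR)/$(BRANCH)/src

.PHONY: run-service
run-service:
	python3 -m my_service  # hypothetical entry point; its imports resolve against $(BRANCH)

Then `BRANCH=feature_one make run-service` and `BRANCH=master make run-service` run the same venv's interpreter against two different checkouts.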
Again, I know this is a hack and it can be very project-specific. But it has worked great for me, and maybe it can be helpful to others.
It's quite sad to see the rationale framed as "upgrading to venvs" being too complicated, when it should kill them outright. Why using Python depends at all on weird shell semantics is beyond me.
There are quite a few things here that I've used for a few years on projects at work - overall I like it.
A big part of that is due to an endless stream of people who couldn't be bothered to learn what a venv was or how to use it, but... yeah, that's what work is like sometimes. The makefile pretty much ended those issues, absolutely worth the effort put into it.
I recently discovered just (https://just.systems), a make alternative that’s much more ergonomic IMO, and have been using it in my personal projects. It’s task-oriented, not goal-oriented, so not a perfect replacement, but works well for my needs.
100% agree, and this bothers me a lot. I like `just`, but for some reason I keep coming back to `make`.
> or not GNU flavor not sure
On my macOS laptop:
which make
# /usr/bin/make
/usr/bin/make --version
# GNU Make 3.81
> doesn’t support parallel tasks by default
I haven't verified this, but I've checked `make --help` using the built-in make on my Mac (GNU Make 3.81), and it mentions the `--jobs` option to run multiple jobs in parallel.
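For what it's worth, a quick way to check is a toy Makefile with two independent targets (a sketch, not from the article):

.PHONY: both slow-a slow-b
both: slow-a slow-b

slow-a:
	sleep 2 && echo "a done"

slow-b:
	sleep 2 && echo "b done"

Plain `make both` takes about 4 seconds, while `make --jobs=2 both` (or `make -j2 both`) finishes in about 2. So the parallelism is there, it's just opt-in.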
Checking the GNU Make manual, it might be the .NOTPARALLEL special target that wasn’t recognized correctly.
I cannot point to the precise issues (as it was more than a year ago), but I encountered 3-4 problems after having a complete Makefile, which was a bit of a downer because I had to rewrite it to support “vanilla” make.
In the end I dropped parallelism and went with a Justfile, and it took a fraction of the time (though, truth be told, I had dynamic PHONY targets, which aren’t super easy to set up in Make).
> The reason to use $$(some-command) instead of the built-in function $(shell some-command) is that the expression will be evaluated every time it is called. [...] When using $(shell ...) the expression gets evaluated only once and reused across all the recipes.
I don't think the last sentence is true. As long as you define `py` using `=` and not `:=`, it will be evaluated every time it's used.
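A small Makefile illustrates the two flavors (a sketch; the Python one-liner is just there to produce a value that changes on every call):

# recursive (=): re-expanded, so the shell command runs on every reference
LAZY  = $(shell python3 -c 'import random; print(random.random())')
# simple (:=): expanded once, when this line is parsed
EAGER := $(shell python3 -c 'import random; print(random.random())')

.PHONY: demo
demo:
	@echo "lazy:  $(LAZY)"
	@echo "lazy:  $(LAZY)"
	@echo "eager: $(EAGER)"
	@echo "eager: $(EAGER)"

The two `lazy` lines print different values because `$(shell ...)` runs on every expansion; the two `eager` lines print the same value because it was evaluated once when the Makefile was read.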
I've looked into pydoit multiple times and I would love to try it. There are a few reasons that make me go back to make (no pun intended!).
* It comes pre-installed on most UNIX operating systems. Yes, macOS ships an older version, but it works in most cases. This makes bootstrapping projects easier.
* I like how simple it is to get started with a Makefile when you just need a few tasks (see the small example below). Although I admit, big Makefiles can become hard to maintain.
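Something like this is usually all it takes to get going (a sketch; the tool names are just examples, and recipe lines need literal tabs):

.PHONY: fmt lint test
fmt:
	black .
lint:
	ruff check .
test:
	pytest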
I've recently experimented with using Makefiles as an ETL orchestrator, and it's becoming quite messy. I believe pydoit is the perfect replacement candidate. I just haven't had time to try it.
Edit:
Another thing I like about Makefiles is that you can keep the "hackiness" of shell scripts, manipulate the environment [0], etc. I guess you can do the same with other tools, but I find it easier to reason about those useful "hacks" in Makefiles.
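For example, target-specific exported variables are one of those hacks I reach for (a sketch, assuming a pytest-based test suite; APP_ENV is a hypothetical setting):

# these are set only for the `test` target and the environment of its recipe
test: export APP_ENV := testing
test: export PYTHONDONTWRITEBYTECODE := 1

.PHONY: test
test:
	python3 -m pytest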
> doit core features are quite stable. If there is no recent development, it does NOT mean the project is not being maintained... The project has 100% unit-test code coverage.
Okay, but support stops at Python 3.10. I have nothing against the tool - it looks interesting. But isn’t support of current versions of Python table stakes for a tool like this? How am I, as a potential user of this tool, supposed to interpret the lack of 3.11 support and a last release dated end of last year?
After the chaos of Python 2to3, Python's backwards compatibility has been amazing. The tool is created for developers and claims 100% code coverage. Determining if it works with any specific version of Python would be a trivial exercise: clone the repo, make a venv, and run the tests.
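A quick sketch of that evaluation (assuming the test suite runs under pytest; check the repo's contributing docs for the exact invocation):

git clone https://github.com/pydoit/doit.git
cd doit
python3 -m venv .venv
. .venv/bin/activate
pip install -e .
pip install pytest
python -m pytest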
This is something that a dev should do when evaluating a new dependency even when it claims to support the version relevant to you. Sadly, many do not.
I’ve used make like this in the past; it’s handy. But zsh autosuggestions obviate the extra maintenance IMO. I type the command one time and it’s in my history. So my common dev actions are more flexibly maintained by recency than by editing a file.
You could say the Makefile also serves as a form of (not great) documentation.
I use Makefiles for managing multiple venvs for various jobs. For example, running tests and running in production have different requirements (no testing libraries). Or an environment for static analysis tools.
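A rough sketch of that pattern (paths, requirements files, and the my_service module are assumptions, not my exact setup):

# each venv is represented by its interpreter, so it is only built when missing
.venv-test/bin/python:
	python3 -m venv .venv-test
	.venv-test/bin/pip install -r requirements.txt -r requirements-test.txt

.venv-prod/bin/python:
	python3 -m venv .venv-prod
	.venv-prod/bin/pip install -r requirements.txt

.PHONY: test run
test: .venv-test/bin/python
	.venv-test/bin/python -m pytest

run: .venv-prod/bin/python
	.venv-prod/bin/python -m my_service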