But why is that multi-step process even necessary?
There should be a quick command-line utility to concatenate multiple video files at exactly the timestamps the user provides. It's such a common operation.
There's no reason the tool can't simply do a streaming decode of multiple different file formats and concatenate the video with sub-second precision. If the input video resolutions differ, scaling the smaller video up to the largest resolution is what the user almost always wants.
I get that FFMPEG is a "plumbing" CLI tool, but a "porcelain" wrapper would be amazing!
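For reference, doing this with plain ffmpeg today looks roughly like the sketch below; the file names, the 1920x1080 target resolution, and the x264/AAC output settings are just placeholders, and the audio streams are assumed to already share a sample rate and channel layout:

    # scale both inputs to a common resolution, normalize SAR, then
    # concatenate video+audio with the concat filter and re-encode
    ffmpeg -i clip_a.mp4 -i clip_b.webm -filter_complex \
      "[0:v]scale=1920:1080,setsar=1[v0]; \
       [1:v]scale=1920:1080,setsar=1[v1]; \
       [v0][0:a][v1][1:a]concat=n=2:v=1:a=1[v][a]" \
      -map "[v]" -map "[a]" -c:v libx264 -crf 20 -c:a aac output.mp4

That is exactly the kind of incantation a "porcelain" wrapper could hide.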
I understand that you want to do that, but any attempt to do so will be decidedly non-optimal due to how keyframes and lossy encoders work.
That's true even if your two files were encoded with x265 at exactly the same bitrate. It's a much more complicated problem than it appears at first glance once you really dig into the command-line options and encoding parameters of x264, x265, and VP9.
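One way to see how much has to line up before a splice without re-encoding is even possible is to dump the relevant video stream parameters for each file and compare them (the file names here are placeholders):

    # codec, profile, resolution, pixel format, frame rate and time base
    # generally all have to match before a stream-copy join will behave
    ffprobe -v error -select_streams v:0 \
      -show_entries stream=codec_name,profile,width,height,pix_fmt,avg_frame_rate,time_base \
      -of default=noprint_wrappers=1 file_a.mp4
    ffprobe -v error -select_streams v:0 \
      -show_entries stream=codec_name,profile,width,height,pix_fmt,avg_frame_rate,time_base \
      -of default=noprint_wrappers=1 file_b.mp4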
It's not as simple as concatenating two files together. You can also get per-frame precision by loading different x264, x265, VP8, or VP9 files into kdenlive and cutting/editing them together, but you will then need to re-encode the resulting output. kdenlive is ultimately a nice GUI front end on top of this kind of workflow.
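Stripped of the GUI, the frame-accurate path reduces to something like this per clip (the timestamps and the x264/AAC settings are just examples):

    # -ss/-to placed after -i: ffmpeg decodes up to the cut point and
    # re-encodes, so the cut lands on the exact frame rather than a keyframe
    ffmpeg -i input.mp4 -ss 00:01:23.456 -to 00:02:34.567 \
      -c:v libx264 -crf 20 -c:a aac clip.mp4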
When I ran into this, it came down to whether I wanted to splice two videos together or re-encode them.
Due to how keyframes work, cutting on keyframe boundaries is a lot faster and easier and doesn't require re-encoding in many cases. This is the default for the segment muxer.
Cutting between keyframes takes a fair bit more effort and requires re-encoding, which I guess is why it's not the default.
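For the keyframe-boundary path, a minimal sketch (the file names and the 60-second segment length are placeholders, and the pieces being joined are assumed to share codec parameters):

    # split on keyframe boundaries only: stream copy, no re-encoding,
    # but the cut points snap to wherever the keyframes happen to fall
    ffmpeg -i input.mp4 -c copy -f segment -segment_time 60 part_%03d.mp4

    # join pieces that already share codec parameters, again without re-encoding
    printf "file 'part_000.mp4'\nfile 'part_001.mp4'\n" > list.txt
    ffmpeg -f concat -safe 0 -i list.txt -c copy joined.mp4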