I think it will always be hard to justify. The number of different configurations proliferates very quickly, so you end up spending a lot of die area, etc., on parts that much of the market doesn't need.
Intel and AMD seem to have settled on including a relatively small (but still significant, die-wise) GPU, sufficient for 95% of consumers. They can't justify spending the die area needed for the large, powerful GPU that a relatively small segment of the market wants (and even then there are 2-3 GPU variants from low end to high end, just for the GPU).
To cover the same market today you're looking at 2x CPU dies and 3x GPU dies, or at least 6 different combinations of CPU and GPU, and you still have loads of people who don't want the GPU at all. It just doesn't make sense to fully integrate them unless die area is so cheap that power consumption is the main limiting factor (the big justification for lots of fixed-function hardware, but that's all way smaller than a GPU).
>Intel and AMD seem to have settled on sticking a relatively small (but still significant die-wise) GPU
Intel yes, but AMD only began integrating a GPU into most of its CPU lineup with the latest Zen 4 generation.
>a large, powerful [integrated] GPU that a relatively small segment of the market wants
I think you're underselling the demand there, particularly if the M* line of CPUs with iGPUs from Apple are anything to go by.
Like I said before, people want powerful graphics processing at an affordable (FSVO affordable with regards to Apple) price.
>and you still have loads of people who don't want the GPU at all.
Intel's been doing iGPUs for well over a decade now because "loads of people" do want iGPUs.
Enthusiasts and professionals alike want them because they are a Lowest Common Denominator fallback for when shit hits the fan. People on a budget (whether monetary or electric) want them regardless of their poor performance because they need a display output, especially in recent years.
And even if you have a discrete GPU anyway, an integrated GPU is still useful to have for offloading certain tasks.
>It just doesn't make sense to fully integrate them unless die area is so cheap that power consumption is the main limiting factor
Five years ago I would have completely agreed with you, but with how bloody expensive discrete GPUs are now, I feel there's a vacated market segment that integrated GPUs with sufficient performance and reasonable pricing can come in to conquer and dominate; not unlike how discrete sound cards went extinct once integrated audio became Good Enough(tm).
They are different beasts for different tasks. Basically, GPUs are optimized for doing lots of small calculations in parallel (think per-polygon or per-pixel work), while CPUs are optimized for big serial workloads; multi-core changed that somewhat, but GPUs still have far more parallel pipelines.
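A rough way to picture the distinction: GPU-friendly work has no dependency between items, so every item can be computed at once, while CPU-style serial work chains each step on the previous result. A minimal, purely illustrative Python sketch (real GPU work would of course go through CUDA, a shader, or similar, not Python threads):

```python
from concurrent.futures import ThreadPoolExecutor

pixels = list(range(8))

# GPU-style: many small, independent computations -> trivially parallel.
def shade(p):
    # Each "pixel" is computed without reference to any other pixel,
    # so the order of execution doesn't matter.
    return p * p

with ThreadPoolExecutor() as pool:
    shaded = list(pool.map(shade, pixels))

# CPU-style: a serial dependency chain -> each step needs the previous result,
# so this loop cannot be naively parallelized.
acc = 0
for p in pixels:
    acc = acc * 2 + p

print(shaded, acc)
```

The first workload scales with however many lanes you can throw at it; the second is bound by single-thread speed, which is why the two designs diverged.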