I don't think anyone really knows, but I also don't think it's quite an either/or. To me a more interesting way to put the question is to ask what it would mean to say that GPT-5 is just applying patterns from its training data when it finds bugs in 1000 lines of new Rust code that were missed by multiple human reviewers. "Applying a memorized pattern" sounds well-defined because it's an everyday concept, but I don't think it actually is. If the bug "fits a pattern" but is expressed in a different programming language, with different variable names and a different context, recognizing the fit and applying the pattern doesn't seem to me like a merely mechanical process.
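To make that concrete, here's a minimal Rust sketch (the functions and the specific bug are invented for illustration): the same off-by-one mistake in two forms that share no text at all, so nothing like string matching could connect them.

```rust
// Two surface forms of the "same" off-by-one bug. Nothing textual is
// shared between them; seeing them as one pattern means looking past
// the constructs, names, and context entirely.

// Form 1: indexing one past the end of a slice.
fn last_byte(buf: &[u8]) -> u8 {
    buf[buf.len()] // BUG: valid indices are 0..buf.len(); this panics
}

// Form 2: the same mistake dressed up as an inclusive range.
fn checksum(data: &[u32]) -> u32 {
    let mut total = 0;
    for i in 0..=data.len() { // BUG: `..=` includes data.len() itself
        total += data[i]; // panics on the final iteration
    }
    total
}
```

A reviewer who catches both has recognized something about index bounds, not a string. That recognition is the step I don't think "applying a pattern" accounts for.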
Kant has an argument in the Critique of Pure Reason that reason cannot be reduced to the application of rules: in order to apply rule A to a situation, you would need a further rule B telling you how to apply rule A, a rule C for applying rule B, and so on in an infinite regress. I think the same is true here: any characterization of "applying a pattern" that would succeed at reducing what LLMs do to something mechanical is vulnerable to the regress argument.
In short: even if you want to say it's pattern matching, retrieving a pattern and applying it requires something much closer to intelligence than the phrase suggests.