Having too many parameters in your model, so many that all of the sample/training data is preserved perfectly, is usually considered a bad thing (overfitting).
But you're right - if the dataset is exhaustive and finite, and the model is large enough to preserve it perfectly - such an overfitted model would work just fine, even if it's unlikely to be a particularly efficient way to build one.
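As a toy illustration of that edge case (a sketch, not a recommendation - the dataset and class names here are made up), here's a pure lookup-table "model" in Python with one memorized entry per training example. Because the training set covers the entire finite input space, its "test" accuracy is trivially perfect:

```python
# Hypothetical exhaustive dataset: every 3-bit input paired with its parity.
# Since the input space is finite and fully covered, nothing is left to
# generalize to.
training_data = {
    (a, b, c): a ^ b ^ c
    for a in (0, 1) for b in (0, 1) for c in (0, 1)
}

class LookupModel:
    """An 'overfitted' model: one parameter (table entry) per example."""
    def __init__(self, data):
        self.table = dict(data)   # memorize everything verbatim

    def predict(self, x):
        return self.table[x]      # every possible input was seen in training

model = LookupModel(training_data)

# Any possible "test" input is already in the table, so accuracy is 100%.
assert all(model.predict(x) == y for x, y in training_data.items())
print("accuracy: 1.0")
```

The catch, as noted above, is efficiency: the table grows linearly with the input space, whereas a model that actually captured the underlying rule (here, XOR/parity) could be far smaller.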