Is R really that good for data cleaning and transformation? It's slow, single-threaded (yes, even for a lot of real-world use cases with data.table), and memory-hungry. People only ignore this because code is generally written from the top of a notebook down to the bottom without ever being re-run.
A popular counterpoint in the R community is that in many data cleaning tasks, the bottleneck is human understanding / coding time, not computation time. In other words, we'd rather spend 1 hour writing a script that runs in 10 minutes and needs to be run a handful of times at most, than spend 6 hours writing something that takes 10 seconds.
Edit: This of course goes hand-in-hand with the claim that it is easier/faster to write R scripts. If you're not familiar with it, the tidyr and dplyr packages in particular (part of the tidyverse) provide a fantastic set of verbs for thinking about data cleaning.
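To give a flavour of what that looks like, here's a minimal sketch of a tidyverse-style cleaning pipeline on a made-up data frame (the column names and values are invented purely for illustration):

```r
# Minimal sketch of tidyverse "verbs" on an invented data frame.
library(dplyr)
library(tidyr)

raw <- tibble(
  id = c(1, 2, 2, 3),
  q1 = c(5, NA, 4, 3),
  q2 = c(2, 3, 3, NA)
)

clean <- raw %>%
  distinct(id, .keep_all = TRUE) %>%            # drop duplicate respondents
  pivot_longer(q1:q2, names_to = "question",    # reshape wide -> long
               values_to = "score") %>%
  filter(!is.na(score)) %>%                     # drop missing answers
  group_by(question) %>%
  summarise(mean_score = mean(score))           # one row per question
```

Each step reads roughly like the English description of the cleaning operation, which is the point being made about coding time.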
I have had this issue as well. Although to be fair, I would say this isn't the fault of R. Educators in the field of data science love notebooks because they can pair documentation, visualizations, and code all in one document. However, heavy reliance on notebooks produces a class of programmers who have very little clue how the code they are writing actually runs.
R has great built-in parallel tools (see, for example, the doSNOW and future frameworks; a short sketch follows after this list);
the best packages for data manipulation are mostly written in C or C++ under the hood (for example data.table and a good part of the tidyverse);
and with frameworks like drake you can easily build a DAG out of your pipeline that handles complex iterations millions of times over. Also check out the Rcpp package, which makes interfacing C++ code with R a breeze (a second sketch follows below).
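For the parallel point above, here's a rough sketch of how the future framework can spread an embarrassingly parallel cleaning step across cores. `slow_clean()` and the file paths are invented stand-ins for whatever per-file step is actually slow:

```r
# Rough sketch of drop-in parallelism with the future framework.
library(future)
library(future.apply)

# Stand-in for an expensive per-file cleaning step.
slow_clean <- function(path) {
  Sys.sleep(1)                 # pretend this is real work
  toupper(basename(path))
}

plan(multisession)             # use all local cores in background R sessions

files <- sprintf("data/file_%02d.csv", 1:8)
results <- future_lapply(files, slow_clean)   # runs the calls in parallel
```

Switching back to sequential execution is just `plan(sequential)`; the calling code doesn't change.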
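And for the Rcpp point, a minimal sketch of the inline workflow: compile a small C++ function and call it from R like any other function. This assumes a working C++ toolchain, and `sum_squares()` is just a toy example:

```r
# Sketch of the Rcpp inline workflow: C++ compiled and callable from R.
library(Rcpp)

cppFunction('
double sum_squares(NumericVector x) {
  double total = 0;
  for (int i = 0; i < x.size(); i++) {
    total += x[i] * x[i];
  }
  return total;
}
')

sum_squares(c(1, 2, 3))   # 14, computed in compiled C++
```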
But of course, if you're comparing R to a pure compiled language, you're out of luck.