When someone can demonstrate that they are more efficient than I am in my areas of research and expertise by using an LLM, I will at least pay attention to their workflow and investigate ways to replicate it with FOSS tools I can host, improve, debug, and control for myself.
Until then, LLMs are of no interest to me. I am pretty good at using grep on local data caches.
Yup, reasonable enough. In areas where you are a true expert (i.e. where there are at most a handful of people like you, or where the expert knowledge is not all online and accessible to LLMs), you essentially already know where to look and how to phrase searches most effectively, so a generic LLM that automatically crawls common databases and sites is going to be useless, almost by definition. Training is just fitting a manifold to the training data (interpolating and extrapolating), and truly expert domains (as defined above) generally don't have enough data for these models to learn helpful semantics.
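To make that curve-fitting framing concrete, here's a toy sketch (plain numpy; the function, degree, and numbers are mine, purely illustrative): a polynomial fit to samples of a function does fine interpolating inside the training range and falls apart extrapolating outside it.

```python
# Toy illustration: "training is just fitting a manifold to data".
# Fit a polynomial to noisy samples of an unknown function, then
# compare interpolation (inside the training range) vs. extrapolation.
import numpy as np

rng = np.random.default_rng(0)
true_fn = np.sin  # stand-in for the "real" domain knowledge

# Dense training data on [0, 2*pi] -- the well-covered regime.
x_train = np.linspace(0, 2 * np.pi, 50)
y_train = true_fn(x_train) + rng.normal(0, 0.05, x_train.shape)
coeffs = np.polyfit(x_train, y_train, deg=7)  # the "model"

x_interp = np.linspace(0, 2 * np.pi, 200)          # inside training support
x_extrap = np.linspace(2 * np.pi, 3 * np.pi, 200)  # outside it

err_interp = np.max(np.abs(np.polyval(coeffs, x_interp) - true_fn(x_interp)))
err_extrap = np.max(np.abs(np.polyval(coeffs, x_extrap) - true_fn(x_extrap)))
print(f"max error interpolating: {err_interp:.3f}")  # small
print(f"max error extrapolating: {err_extrap:.3f}")  # typically huge
```

Thin out x_train and even the interpolation error grows, which is the sparse-data problem expert domains have.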
A model fine-tuned or purpose-built for such niche domains might help non-experts and learners do better semantic / common-language searches, and that might accelerate learning. But it might not: not all domains have their core knowledge encoded in documents and/or language, and summarization is sometimes only harmful (especially for highly technical writing).
It also seems almost definitional to me that LLMs aren't going to beat true expertise, because LLMs aren't true intelligence or AGI; they really do just estimate token path densities / curve-fit, and real understanding is clearly more than that.
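To spell out what "estimating token path densities" means in its most stripped-down form (a bigram counter here; real LLMs are vastly larger, but the estimation principle is the same, and the corpus is mine, purely illustrative):

```python
# Minimal "token path density" estimator: a bigram model.
# Next-token probabilities come purely from observed counts --
# curve-fitting over token transitions, no understanding involved.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token_probs(token):
    """P(next | token), estimated from corpus frequencies alone."""
    c = counts[token]
    total = sum(c.values())
    return {t: n / total for t, n in c.items()} if total else {}

print(next_token_probs("the"))    # {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}
print(next_token_probs("sat"))    # {'on': 1.0}
print(next_token_probs("laser"))  # {} -- unseen token: the model has nothing to say
```

It reproduces the frequencies it has seen and has literally nothing for what it hasn't; the rest of what an LLM adds is, in this framing, smoother interpolation between such paths.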