It seems to be in vogue that RAG is one of the best solutions to reduce the problem of hallucinations in LLMs. What do you think? Are there any other alternatives or solutions in sight?

#1 motivation for RAG: you want to use the LLM to provide answers about a specific domain. You want to not depend on the LLM's "world knowledge" (what was in its training data), either because your domain knowledge