Ollama, a popular tool for running LLMs locally, has been criticized for obscuring its reliance on llama.cpp, the C++ inference engine created by Georgi Gerganov that powers it under the hood, and for performance and compatibility shortcomings of its own. Critics advise moving away from Ollama in favor of llama.cpp or other open-source alternatives that offer better performance, broader compatibility, and greater transparency.