llama.cpp guide - Running LLMs locally, on any hardware, from scratch
2024-10-28 · SteelPh0enix
#llama.cpp #llm #ai #guide

Psst, kid, want some cheap and small LLMs?