
GGML is another neat ML abstraction layer, but I don't think much work has been dedicated to the Windows port.


GGML is what llama.cpp is built on, and GPU support there still requires CUDA to be installed, unfortunately. I saw a PR for DirectML, but I'm not really holding my breath.


You don't have to install the whole CUDA toolkit. They have a redistributable runtime.


Oh, I can't believe I missed that! That makes whisper.cpp and llama.cpp valid options if the user has an Nvidia GPU, thanks.


Whisper.cpp and llama.cpp also work with Vulkan.
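For anyone else who missed this: both projects share GGML's backend build flags, so a Vulkan build is just a CMake option. A rough sketch (flag name per the projects' current build docs; older trees used a `LLAMA_`-prefixed variant, so check your checkout):

```shell
# Sketch: building llama.cpp with the Vulkan backend instead of CUDA.
# Assumes CMake and the Vulkan SDK are installed and on PATH.
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release
# whisper.cpp accepts the same GGML_VULKAN flag.
```

This works on any GPU with a Vulkan driver, which is what makes it attractive on Windows machines without Nvidia hardware.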


Yeah, I researched this and I absolutely missed this whole part. In my defense, I looked into this in 2023, which is ages ago :) Looks like local models are getting much more mature.



