
How about expense? LLMs do dramatically more computation for simple tasks, and run only on relatively exotic, expensive hardware. You have to trust an LLM provider, and keep paying them.

If a traditional NLP solution can run under your control and handle the task at hand, it can be plainly much cheaper at scale.
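To make "traditional NLP under your control" concrete: a classic pipeline of that era can be as small as a TF-IDF scorer over bag-of-words documents, running on any CPU with no external service. The sketch below is a hypothetical toy (hand-rolled TF-IDF and cosine similarity over made-up three-word documents), not anything from the thread; real deployments would use a library like scikit-learn:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute TF-IDF weight dicts for a list of tokenized documents."""
    n = len(docs)
    # document frequency: in how many docs does each term appear
    df = Counter(term for doc in docs for term in set(doc))
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: (c / len(doc)) * math.log(n / df[t])
                     for t, c in tf.items()})
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# toy corpus, invented for illustration
docs = ["cheap cpu inference".split(),
        "cpu inference at scale".split(),
        "llm api pricing".split()]
vecs = tfidf_vectors(docs)
# documents sharing terms score as more similar than disjoint ones
print(cosine(vecs[0], vecs[1]) > cosine(vecs[0], vecs[2]))  # → True
```

Everything here is a few hundred arithmetic operations per document, which is the cost argument in a nutshell: no GPU, no API call, no per-token billing.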



That's absurd; there are thousands of open-source LLMs of all shapes and sizes that you can run yourself.


Are many of them comparable to Claude Sonnet or GPT-5? What kind of hardware do they require?


None of them, of course. But the point is that even smaller open-source "LLMs" (more specifically, transformer architectures) that you can run anywhere yourself outperform these "traditional" pipelines with less compute. That said, it's not well defined what "traditional" even means here, since I wouldn't really describe CNNs/BiLSTMs as "traditional"; in my mind that would be spaCy <2.0 and NLTK (linear models like SVMs over TF-IDF, plus Word2Vec/GloVe/fastText, etc.). LLMs are at least two generations ahead of those, since the whole "deep learning" craze came in between.
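For readers unfamiliar with that Word2Vec/GloVe generation: a sentence was typically represented as the plain average of static word vectors, with no attention and no context-dependence, which is exactly what transformers improved on. This toy sketch uses invented 3-d vectors purely for illustration (real embeddings are ~300-d and trained on large corpora):

```python
# Toy Word2Vec-era "average of static word vectors" sentence representation.
# The 3-d vectors below are invented for the example; real embeddings are
# ~300-d and learned from a corpus, but the averaging step is the same.
toy_vecs = {
    "dog":    [0.9, 0.1, 0.0],
    "cat":    [0.8, 0.2, 0.1],
    "stock":  [0.0, 0.9, 0.8],
    "market": [0.1, 0.8, 0.9],
}

def embed(sentence):
    """Average the static vectors of known words: no context, no attention."""
    words = [w for w in sentence.split() if w in toy_vecs]
    return [sum(toy_vecs[w][i] for w in words) / len(words) for i in range(3)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: sum(x * x for x in v) ** 0.5
    return dot / (norm(a) * norm(b))

pets = embed("dog cat")
finance = embed("stock market")
# related words land near each other, so topic similarity works...
print(cosine(pets, embed("cat")) > cosine(pets, finance))  # → True
# ...but "dog" gets the same vector in every sentence, which is the
# context-blindness that the later transformer generation fixed
```

The limitation in the last comment is the generational gap being argued here: static embeddings cannot distinguish word senses by context, while even a small transformer can.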



