Didn't research show that models get worse at translation as more languages are added? The curse of multilinguality? Lauscher et al., 2020?

It looks like Meta found a way forward.

Reading Meta's abstract, it seems they have found ways to improve the quality of the training data, and also built new evaluation tools?

They are also saying that OMT-LLaMA does a better job at text generation than other baseline models.


