ChatGPT has a language problem — but science can fix it
AIs built on Large Language Models have wowed by producing remarkably fluent text. However, their ability to do this is limited in many languages. As the data and resources available to train a model in a specific language drop, so does the model's performance, meaning that for some languages the AIs are effectively useless.
Researchers are aware of this problem and are trying to find solutions, but the challenge extends far beyond the technical, with moral and social questions to be answered. This podcast explores how Large Language Models could be improved in more languages, and the issues that could arise if they are not.
Watch our related video of people trying out ChatGPT in different languages.
Hosted on Acast. See acast.com/privacy for more information.