📰 Check out our newest EACL 2023 work
An artificial intelligence hub where you can read about our latest state-of-the-art deep learning research.
We present ViT5, a pretrained Transformer-based encoder-decoder model for the Vietnamese language that achieves state-of-the-art results on Vietnamese text summarization.
We introduce the second release of VietAI’s MTet project, which stands for Multi-domain Translation for English and VieTnamese. With this release, we expand the first-ever large-scale multi-domain English-Vietnamese translation dataset to 4.2M examples across 11 domains. We also demonstrate state-of-the-art results on IWSLT’15 (+3.5 BLEU for English-Vietnamese).
Our researchers and students are working on a wide range of topics in AI.