- Corpus ID: 5723167
Evaluating Neural Machine Translation in English-Japanese Task
@inproceedings{Zhu2015EvaluatingNM, title={Evaluating Neural Machine Translation in English-Japanese Task}, author={Zhongyuan Zhu}, booktitle={Workshop on Asian Translation}, year={2015}, url={https://api.semanticscholar.org/CorpusID:5723167} }
- Zhongyuan Zhu
- Published in Workshop on Asian Translation 2015
- Computer Science
A simple workaround to find missing translations with a back-off system is described, and a specific error pattern in NMT translations that omits some information and thus fails to preserve the complete meaning is demonstrated.
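As a hedged illustration of the kind of back-off workaround the abstract describes, the sketch below falls back to an SMT translation when the NMT output appears to omit source content. The names (`nmt_translate`, `smt_translate`, `looks_incomplete`) are placeholders for this sketch, not the paper's actual interface.

```python
# Illustrative sketch of a back-off workaround for missing translations.
# All callables here are hypothetical placeholders, not the paper's implementation.
def translate_with_backoff(source, nmt_translate, smt_translate, looks_incomplete):
    """Return the NMT output unless it seems to omit information, then back off to SMT."""
    nmt_output = nmt_translate(source)
    if looks_incomplete(source, nmt_output):  # e.g. source content words left untranslated
        return smt_translate(source)          # back-off output preserves the full meaning
    return nmt_output
```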
12 Citations
Tree-to-Sequence Attentional Neural Machine Translation
- Akiko Eriguchi, Kazuma Hashimoto, Yoshimasa Tsuruoka
- Computer Science, Linguistics
- 2016
This work proposes a novel end-to-end syntactic NMT model, extending a sequence-to-sequence model with the source-side phrase structure, which has an attention mechanism that enables the decoder to generate a translated word while softly aligning it with phrases as well as words of the source sentence.
Neural Machine Translation with Source-Side Latent Graph Parsing
- Kazuma Hashimoto, Yoshimasa Tsuruoka
- Computer Science
- 2017
This paper presents a novel neural machine translation model which jointly learns translation and source-side latent graph representations of sentences which significantly outperforms the previous best models on the standard English-to-Japanese translation dataset.
Incorporating Source-Side Phrase Structures into Neural Machine Translation
- Akiko Eriguchi, Kazuma Hashimoto, Yoshimasa Tsuruoka
- Computer Science, Linguistics
- 2019
This model has an attention mechanism that enables the decoder to generate a translated word while softly aligning it with phrases as well as words of the source sentence, and is called a tree-to-sequence NMT model, extending a sequence-to-sequence model with the source-side phrase structure.
Pre-Reordering for Neural Machine Translation: Helpful or Harmful?
Factored NMT using SMT-based pre-reordering features on Japanese→English and Chinese→English is beneficial and can further improve by 4.48 and 5.89 relative BLEU points, respectively, compared to the baseline NMT system.
Character-based Decoding in Tree-to-Sequence Attention-based Neural Machine Translation
- Akiko Eriguchi, Kazuma Hashimoto, Yoshimasa Tsuruoka
- Computer Science, Linguistics, WAT@COLING
- 2016
This paper reports the systems (UT-AKY) submitted to the 3rd Workshop on Asian Translation (WAT 2016) and their results on the English-to-Japanese translation task, confirming that the character-based decoder can cover almost the full vocabulary in the _target language and generate translations much faster than the word-based model.
Neural Machine Translation on Myanmar Language
- H. Lwin, Thinn Thinn Wai
- Computer Science, Linguistics, ICO
- 2019
A recurrent neural network encoder-decoder architecture with an attention mechanism is implemented to evaluate the BLEU score on Myanmar-English translation results, and experimental results show that batch size 64 gives a better BLEU score than other batch sizes.
WORD LEVEL ENGLISH TO HINDI NEURAL MACHINE TRANSLATION
- Nomi Baruah, Aurangzeb Khan, Arjun Gogoi
- Computer Science, Linguistics
- 2021
NMT is one of the most recent and effective translation techniques among all existing machine translation systems and outperforms the others, according to human evaluation of the systems.
Understanding Pre-Editing for Black-Box Neural Machine Translation
- Rei MiyataAtsushi Fujita
- Computer Science, Linguistics
- 2021
Pre-editing is the process of modifying the source text (ST) so that it can be translated by machine translation (MT) with better quality. Despite the unpredictability of black-box neural MT (NMT),…
Overview of the 1st Workshop on Asian Translation
- Toshiaki Nakazawa, Hideya Mino, Isao Goto, S. Kurohashi, E. Sumita
- Linguistics
- 2014
The results of the shared tasks from the 3rd workshop on Asian translation (WAT2016) are presented, including J↔E and J↔C scientific paper translation subtasks, C↔J, K↔J, and E↔J patent translation subtasks, I↔E newswire subtasks, and H↔E and H↔J mixed-domain subtasks.
20 References
Addressing the Rare Word Problem in Neural Machine Translation
- Thang Luong, I. Sutskever, Quoc V. Le, O. Vinyals, Wojciech Zaremba
- Computer Science
- 2015
This paper proposes and implements an effective technique to address the problem of end-to-end neural machine translation's inability to correctly translate very rare words, and is the first to surpass the best result achieved on a WMT’14 contest task.
Recurrent Continuous Translation Models
- Nal Kalchbrenner, Phil Blunsom
- Computer Science
- 2013
We introduce a class of probabilistic continuous translation models called Recurrent Continuous Translation Models that are purely based on continuous representations for words, phrases and sentences…
Neural Machine Translation by Jointly Learning to Align and Translate
- Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio
- Computer Science
- 2015
It is conjectured that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and it is proposed to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a _target word, without having to form these parts as a hard segment explicitly.
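The soft-search described in that abstract can be summarized, as a sketch in the commonly used notation for this model, by the following alignment and context-vector equations, where h_j are the encoder annotations, s_{i-1} is the previous decoder state, and a is a learned scoring function:

```latex
e_{ij} = a(s_{i-1}, h_j), \qquad
\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k} \exp(e_{ik})}, \qquad
c_i = \sum_{j} \alpha_{ij} h_j
```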
Decoder Integration and Expected BLEU Training for Recurrent Neural Network Language Models
- Michael Auli, Jianfeng Gao
- Computer Science
- 2014
This work shows how a recurrent neural network language model can be optimized towards an expected BLEU loss instead of the usual cross-entropy criterion, and tackles the issue of directly integrating a recurrent network into first-pass decoding under an efficient approximation.
On Using Very Large _target Vocabulary for Neural Machine Translation
- Sébastien Jean, Kyunghyun Cho, R. Memisevic, Yoshua Bengio
- Computer Science
- 2015
It is shown that decoding can be efficiently done even with the model having a very large _target vocabulary by selecting only a small subset of the whole _target vocabulary.
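As a rough sketch of the idea in that abstract (restricting the softmax at decoding time to a small candidate subset of a very large _target vocabulary), the snippet below normalizes scores only over the candidate ids; the vocabulary size and the subset choice are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def restricted_softmax(logits: np.ndarray, candidate_ids: list[int]) -> dict[int, float]:
    """Normalize decoder scores over a small candidate subset instead of the full vocabulary."""
    scores = logits[candidate_ids]
    scores = scores - scores.max()                 # subtract max for numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    return dict(zip(candidate_ids, probs))

logits = np.random.randn(500_000)                  # scores over a very large _target vocabulary
candidates = [0, 1, 2, 17, 42, 1234]               # hypothetical candidate word ids
print(restricted_softmax(logits, candidates))
```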
Weblio Pre-reordering Statistical Machine Translation System
- Zhongyuan Zhu
- Computer Science, Linguistics
- 2014
This system applied the pre-reordering method described in (Zhu et al., 2014), extended the model to obtain N-best pre-reordering results, and utilized N-best parse trees simultaneously to explore the potential improvement for a pre-reordering system with forest input.
Sequence to Sequence Learning with Neural Networks
- I. Sutskever, O. Vinyals, Quoc V. Le
- Computer Science
- 2014
This paper presents a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure, and finds that reversing the order of the words in all source sentences improved the LSTM's performance markedly, because doing so introduced many short-term dependencies between the source and the _target sentence, which made the optimization problem easier.
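The source-reversal trick mentioned in that abstract amounts to flipping the source token order before encoding; a minimal illustration follows, where the function name and example tokens are made up for this sketch.

```python
def prepare_source(tokens: list[str], reverse: bool = True) -> list[str]:
    """Optionally reverse the source tokens before feeding them to the encoder,
    shortening the distance between the start of the source and the start of the _target."""
    return list(reversed(tokens)) if reverse else list(tokens)

print(prepare_source(["the", "cat", "sat", "on", "the", "mat"]))
# ['mat', 'the', 'on', 'sat', 'cat', 'the']
```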
On the Properties of Neural Machine Translation: Encoder–Decoder Approaches
- Kyunghyun Cho, B. V. Merrienboer, Dzmitry Bahdanau, Yoshua Bengio
- Computer Science, SSST@EMNLP
- 2014
It is shown that the neural machine translation performs relatively well on short sentences without unknown words, but its performance degrades rapidly as the length of the sentence and the number of unknown words increase.
Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation
- Kyunghyun Cho, B. V. Merrienboer, Yoshua Bengio
- Computer Science
- 2014
Qualitatively, the proposed RNN Encoder–Decoder model learns a semantically and syntactically meaningful representation of linguistic phrases.
Overview of the 1st Workshop on Asian Translation
- Toshiaki Nakazawa, Hideya Mino, Isao Goto, S. Kurohashi, E. Sumita
- Linguistics
- 2014
The results of the shared tasks from the 3rd workshop on Asian translation (WAT2016) are presented, including J↔E and J↔C scientific paper translation subtasks, C↔J, K↔J, and E↔J patent translation subtasks, I↔E newswire subtasks, and H↔E and H↔J mixed-domain subtasks.