Multi-task Sequence to Sequence Learning
In the sequence-to-sequence framework, there are three basic settings for multi-task learning:
- one-to-many setting
- many-to-one setting
- many-to-many setting
one-to-many
Several tasks, such as translation and parsing, share a single encoder, and each task has its own decoder. For example, given a sequence of English words as input, separate decoders can generate the German translation, the parse tags, and the same sequence of English words again (an autoencoder objective).
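A minimal PyTorch sketch of this setting, assuming LSTM encoder/decoders; the module layout, task names, and vocabulary sizes below are illustrative assumptions, not the paper's actual implementation:

```python
import torch.nn as nn

class OneToMany(nn.Module):
    """One shared encoder, one decoder per task (hypothetical layout)."""
    def __init__(self, vocab_sizes, emb=256, hidden=512):
        super().__init__()
        # Shared English encoder.
        self.src_emb = nn.Embedding(vocab_sizes["en"], emb)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True)
        # Task-specific embeddings, decoders, and output projections.
        self.tgt_emb = nn.ModuleDict(
            {t: nn.Embedding(v, emb) for t, v in vocab_sizes.items()})
        self.decoders = nn.ModuleDict(
            {t: nn.LSTM(emb, hidden, batch_first=True) for t in vocab_sizes})
        self.out = nn.ModuleDict(
            {t: nn.Linear(hidden, v) for t, v in vocab_sizes.items()})

    def forward(self, src, tgt, task):
        _, state = self.encoder(self.src_emb(src))   # shared encoding
        dec_out, _ = self.decoders[task](self.tgt_emb[task](tgt), state)
        return self.out[task](dec_out)               # per-task logits

# Tasks from the notes: German translation, parse tags, English autoencoder.
model = OneToMany({"en": 10000, "de": 10000, "parse": 128})
```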
many-to-one
Only the decoder is shared across tasks, while each task keeps its own encoder. This fits tasks whose outputs live in the same space; for example, image captioning and German-to-English translation can share one English decoder.
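A sketch of the mirror-image setup, again with assumed module names and sizes; the image encoder here is just a linear map from precomputed image features to the decoder's initial state:

```python
import torch
import torch.nn as nn

class ManyToOne(nn.Module):
    """Per-task encoders feeding a single shared decoder (hypothetical)."""
    def __init__(self, en_vocab=10000, de_vocab=10000, img_dim=2048,
                 emb=256, hidden=512):
        super().__init__()
        # Task-specific encoders: German text, and image features.
        self.de_emb = nn.Embedding(de_vocab, emb)
        self.text_encoder = nn.LSTM(emb, hidden, batch_first=True)
        self.img_encoder = nn.Linear(img_dim, hidden)  # image -> init state
        # Shared English decoder for both captioning and translation.
        self.en_emb = nn.Embedding(en_vocab, emb)
        self.decoder = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, en_vocab)

    def forward(self, inp, tgt, task):
        if task == "translate":                  # German sentence input
            _, state = self.text_encoder(self.de_emb(inp))
        else:                                    # image feature input
            h = torch.tanh(self.img_encoder(inp)).unsqueeze(0)
            state = (h, torch.zeros_like(h))
        dec_out, _ = self.decoder(self.en_emb(tgt), state)
        return self.out(dec_out)                 # shared English logits
```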
many-to-many
Multiple input encoders are paired with multiple output decoders, combining the two settings above; e.g., adding an autoencoder for each language to a translation model gives two encoders and two decoders.
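One common way to train any of these multi-task models is to alternate between tasks, sampling each task with some mixing ratio rather than summing all losses in one batch. A minimal sketch of such a schedule; the ratio values and the `step_fns` interface are purely illustrative assumptions:

```python
import random

# Hypothetical mixing ratios: relative frequency of each task per update.
MIXING = {"translate_de": 1.0, "parse": 0.1, "autoencode": 0.05}

def sample_task():
    tasks, weights = zip(*MIXING.items())
    return random.choices(tasks, weights=weights, k=1)[0]

def train(num_steps, step_fns):
    """step_fns maps a task name to a function that runs one gradient
    step on a batch for that task (assumed to be defined elsewhere)."""
    for _ in range(num_steps):
        step_fns[sample_task()]()
```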
Compared with skip-thought as the unsupervised objective, the autoencoder helps less in terms of perplexity but more in terms of BLEU score.
All of these settings are evaluated on machine translation tasks.