# Sequence-to-Sequence with Attention Model for Text Summarization

Authors: Xin Pan (xpan@google.com, github:panyx0718), Peter Liu (peterjliu@google.com)

## Introduction

The core model is the traditional sequence-to-sequence model with attention. It is customized (mostly inputs/outputs) for the text summarization task. The model has been trained on the Gigaword dataset and achieved state-of-the-art results.
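To illustrate the attention mechanism at the heart of the model, here is a minimal NumPy sketch of dot-product attention over encoder states. This is a simplified stand-in, not the repo's TensorFlow implementation; the names `encoder_states` and `decoder_state` are hypothetical.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_context(encoder_states, decoder_state):
    """Dot-product attention: score each encoder state by its similarity
    to the current decoder state, normalize the scores into a distribution,
    and return the weighted sum of encoder states (the context vector)."""
    scores = encoder_states @ decoder_state   # shape (T,)
    weights = softmax(scores)                 # attention distribution over T steps
    context = weights @ encoder_states        # shape (H,)
    return context, weights

# Toy example: three encoder states of hidden size 2.
encoder_states = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
decoder_state = np.array([1.0, 0.0])
context, weights = attention_context(encoder_states, decoder_state)
```

At each decoding step, the context vector summarizes the input sequence, weighted toward the positions most relevant to the current decoder state; for summarization this lets the decoder attend to the source sentence words it is currently condensing.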