Learning both domain mapping and domain knowledge is crucial for sequence-to-sequence (seq2seq) tasks. Traditionally, seq2seq models characterized only the domain mapping, while the knowledge in the source and target domains was ignored. To strengthen seq2seq representation, this study presents a unified transformer for bidirectional domain mapping in which collaborative regularization is imposed. This regularization enforces the bidirectional mapping constraint and prevents the model from overfitting, yielding better generalization. Importantly, a unified learning objective is optimized for collaborative learning among the different modules across the two domains and the two mapping directions. Experiments on machine translation demonstrate the merit of the unified transformer in comparison with existing methods under different tasks and settings.
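The unified objective described above can be sketched as follows. This is a minimal, hypothetical illustration, assuming the objective sums the two directional mapping losses and adds a collaborative regularization term that couples the two directions; the function name, the form of the regularizer, and the weight `lam` are assumptions for illustration, not the paper's actual formulation.

```python
def unified_objective(loss_s2t: float, loss_t2s: float, lam: float = 0.1) -> float:
    """Combine the source-to-target and target-to-source mapping losses
    with a collaborative regularization term.

    Hypothetical sketch: the regularizer here simply penalizes disagreement
    between the two directional losses, standing in for the paper's
    bidirectional mapping constraint.
    """
    # Collaborative regularization: encourage the two mapping directions
    # to stay consistent with each other.
    reg = (loss_s2t - loss_t2s) ** 2
    # Unified objective: both directions are optimized jointly,
    # with the regularizer weighted by lam.
    return loss_s2t + loss_t2s + lam * reg
```

Minimizing such a joint objective trains both mapping directions in one model, rather than fitting each direction independently.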