This paper proposes a sequential framework that enables humanoid robot motion-rendering models to express emotions, inspired by the mechanisms underlying the real-time emotion locus of music signals. The music emotion system progressively extracts musical features and characterizes music-induced emotions on an emotion plane, tracing the real-time emotion locus of the music. Five feature sets are extracted from the WAV file of the music, and feature-weighted scoring algorithms continuously mark the trajectory on the emotion plane. The boundaries of four emotions are demarcated by a Gaussian mixture model, and a graphical interface displays the tracking of the dynamic emotion locus. The music emotion locus and robot movement are then integrated and analyzed through a modified Laban movement analysis. Finally, the robot controller, organized with multi-modal whole-body awareness of music emotions, enables the robot's autonomous locomotion.
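
The demarcation of four emotion regions on the emotion plane can be illustrated with a minimal sketch. This is not the paper's implementation: the feature sets, plane coordinates, and emotion labels (happy, angry, sad, relaxed, placed in the four quadrants of a valence-arousal plane) are assumptions for illustration, and scikit-learn's `GaussianMixture` stands in for whichever GMM fitting procedure the system actually uses.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic training points clustered in the four quadrants of a
# valence-arousal plane (valence on x, arousal on y); these stand in
# for coordinates derived from the five music feature sets.
centers = {
    "happy":   ( 0.6,  0.6),   # positive valence, high arousal
    "angry":   (-0.6,  0.6),   # negative valence, high arousal
    "sad":     (-0.6, -0.6),   # negative valence, low arousal
    "relaxed": ( 0.6, -0.6),   # positive valence, low arousal
}
X = np.vstack([rng.normal(c, 0.15, size=(50, 2)) for c in centers.values()])

# One Gaussian component per emotion; the fitted components' decision
# boundaries demarcate the four emotion regions on the plane.
gmm = GaussianMixture(n_components=4, random_state=0).fit(X)

# Associate each fitted component with the emotion whose assumed
# quadrant center lies nearest to the component mean.
labels = list(centers)
component_emotion = {
    k: min(labels, key=lambda e: np.linalg.norm(mu - np.array(centers[e])))
    for k, mu in enumerate(gmm.means_)
}

# Classify one new (valence, arousal) point sampled from the emotion locus.
point = np.array([[0.5, 0.7]])
emotion = component_emotion[int(gmm.predict(point)[0])]
print(emotion)
```

In the full system, each point fed to `predict` would come from the continuously updated trajectory that the feature-weighted scoring marks on the plane, so the classification runs repeatedly as the locus moves.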