This paper proposes a sequential framework that progressively extracts musical features and characterizes music-induced emotions on a predetermined emotion plane, tracing the real-time emotion locus of music. To build up the emotion plane, 192 music clips with predefined emotions are used to train the system. Five feature sets (onset intensity, timbre, sound volume, mode, and dissonance) are extracted from WAV files to represent the characteristics of each music clip. Feature-weighted scoring algorithms continuously mark the feature-related emotion locus on the emotion plane. A Gaussian mixture model (GMM) demarcates the boundaries of "Exuberance", "Contentment", "Anxious", and "Depression" on the emotion plane for the trained music data. A graphical interface plots the emotion arousal locus on a two-dimensional mood model to represent the dynamic emotional transitions induced by music. In a preliminary evaluation with test music, the system draws the locus of emotions evoked by music audio signals.
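As an illustration of the quadrant-demarcation step, the sketch below fits a Gaussian mixture over 2D (valence, arousal) scores and names each component by the quadrant its mean falls in. The synthetic data, cluster centers, and the `classify` helper are assumptions for demonstration only, not the paper's trained corpus or actual feature scores.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical quadrant centers on the emotion plane:
# (+,+) Exuberance, (+,-) Contentment, (-,+) Anxious, (-,-) Depression.
centers = {
    "Exuberance": (0.6, 0.6),
    "Contentment": (0.6, -0.6),
    "Anxious": (-0.6, 0.6),
    "Depression": (-0.6, -0.6),
}

# Synthetic "trained music" scores clustered around each center.
X = np.vstack([rng.normal(c, 0.15, size=(48, 2)) for c in centers.values()])

# Fit a 4-component GMM; each component models one emotion region.
gmm = GaussianMixture(n_components=4, random_state=0).fit(X)

def quadrant(mean):
    """Name a Gaussian component by the quadrant its mean occupies."""
    v, a = mean
    if v >= 0:
        return "Exuberance" if a >= 0 else "Contentment"
    return "Anxious" if a >= 0 else "Depression"

component_labels = [quadrant(m) for m in gmm.means_]

def classify(point):
    """Assign a (valence, arousal) point to the most likely component."""
    idx = gmm.predict(np.asarray(point).reshape(1, -1))[0]
    return component_labels[idx]

# Classify a point in the upper-right quadrant of the plane.
print(classify([0.5, 0.5]))
```

In a real system, the per-frame feature scores produced by the feature-weighted scoring step would replace the synthetic points, and successive classifications would trace the emotion locus over time.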