A Hybrid Approach to Emotion Classification Using Multimodal Text and Audio Data

This paper presents a dual-modal approach that merges audio and textual data for emotion classification. By combining a BERT text model with a ModifiedAlexNet audio model, the approach surpasses single-modal baselines on the IEMOCAP dataset. The work aims to improve human-computer interaction and to encourage further research in multi-modal emotion recognition.
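The sketch below illustrates the general idea of fusing the two modalities: a BERT encoder produces a sentence-level text feature, an AlexNet-style CNN encodes an audio spectrogram, and the concatenated features feed a small classifier head. This is a minimal illustration, not the authors' implementation; the layer sizes, the `bert-base-uncased` checkpoint, the log-mel spectrogram input, and the 4-class label set are assumptions.

```python
# Illustrative late-fusion sketch (assumed architecture, not the paper's exact code).
import torch
import torch.nn as nn
from transformers import BertModel

class AudioCNN(nn.Module):
    """AlexNet-style convolutional encoder over log-mel spectrograms (hypothetical layout)."""
    def __init__(self, out_dim=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=11, stride=4, padding=2), nn.ReLU(),
            nn.MaxPool2d(3, stride=2),
            nn.Conv2d(64, 192, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(3, stride=2),
            nn.Conv2d(192, 256, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, 1)),   # global pooling over time/frequency
        )
        self.proj = nn.Linear(256, out_dim)

    def forward(self, spec):                # spec: (batch, 1, n_mels, time)
        x = self.features(spec).flatten(1)
        return self.proj(x)

class HybridEmotionClassifier(nn.Module):
    """Concatenates BERT [CLS] text features with audio CNN features, then classifies."""
    def __init__(self, num_emotions=4, audio_dim=256):
        super().__init__()
        self.text_encoder = BertModel.from_pretrained("bert-base-uncased")
        self.audio_encoder = AudioCNN(out_dim=audio_dim)
        self.classifier = nn.Sequential(
            nn.Linear(self.text_encoder.config.hidden_size + audio_dim, 256),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(256, num_emotions),
        )

    def forward(self, input_ids, attention_mask, spectrogram):
        # [CLS] token embedding as the utterance-level text representation
        text_feat = self.text_encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state[:, 0]
        audio_feat = self.audio_encoder(spectrogram)
        fused = torch.cat([text_feat, audio_feat], dim=-1)
        return self.classifier(fused)       # logits over emotion classes
```

A simple concatenation ("late fusion") head like this is one common way to combine modalities; attention-based or gated fusion are alternatives the paper may use instead.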