malikbaqi12/A-Hybrid-Approach-to-Emotion-Classification-Using-Multimodal-Text-and-Audio-Data
About
This paper presents a novel dual-modal approach that merges audio and textual data for emotion classification. By combining a BERT text model with a ModifiedAlexNet audio model, it surpasses single-modal baselines on the IEMOCAP dataset. The work aims to enhance human-computer interaction and to inspire further research in multi-modal emotion recognition.
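The repository description does not include reference code, so the following is a minimal sketch of what a dual-modal BERT + modified-AlexNet classifier could look like in PyTorch. The class names (`ModifiedAlexNet`, `DualModalClassifier`), the late-fusion-by-concatenation strategy, the layer sizes, and the assumption of four IEMOCAP emotion classes (angry, happy, sad, neutral) are illustrative guesses, not the authors' implementation.

```python
# Hypothetical sketch of a dual-modal emotion classifier: a BERT text branch
# plus an AlexNet-style CNN over log-mel spectrograms, fused by concatenation.
# All architecture details below are assumptions, not the paper's exact design.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer


class ModifiedAlexNet(nn.Module):
    """AlexNet-style CNN adapted to 1-channel spectrogram input (assumed)."""

    def __init__(self, embed_dim: int = 256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=11, stride=4, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(64, 192, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(192, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d((6, 6)),
        )
        self.fc = nn.Linear(256 * 6 * 6, embed_dim)

    def forward(self, spectrogram: torch.Tensor) -> torch.Tensor:
        # spectrogram: (batch, 1, n_mels, time)
        x = self.features(spectrogram)
        return self.fc(torch.flatten(x, 1))


class DualModalClassifier(nn.Module):
    """Concatenates the BERT pooled embedding with the audio embedding,
    then classifies with a single linear head (assumed fusion strategy)."""

    def __init__(self, num_classes: int = 4, audio_dim: int = 256):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.audio = ModifiedAlexNet(embed_dim=audio_dim)
        self.classifier = nn.Linear(
            self.bert.config.hidden_size + audio_dim, num_classes
        )

    def forward(self, input_ids, attention_mask, spectrogram):
        text_emb = self.bert(
            input_ids=input_ids, attention_mask=attention_mask
        ).pooler_output
        audio_emb = self.audio(spectrogram)
        fused = torch.cat([text_emb, audio_emb], dim=1)
        return self.classifier(fused)


if __name__ == "__main__":
    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = DualModalClassifier()
    enc = tokenizer(["I am so happy today"], return_tensors="pt", padding=True)
    spec = torch.randn(1, 1, 64, 256)  # placeholder log-mel spectrogram batch
    logits = model(enc["input_ids"], enc["attention_mask"], spec)
    print(logits.shape)  # torch.Size([1, 4])
```

Late fusion of this kind is one common way to combine a pretrained text encoder with a CNN audio branch; the paper may use a different fusion point or head, so treat this only as a starting template.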