# IEMOCAP with Curriculum Learning Metrics

This dataset enhances the original IEMO_WAV_Diff_2 dataset with inter-evaluator agreement metrics for curriculum learning, following Lotfian & Busso (2019).
## Additional Columns

- `curriculum_order`: Training order (1 = highest agreement, train first)
- `overall_agreement`: Combined agreement score (0–1, higher is better)
- `fleiss_kappa`: Fleiss' kappa for categorical agreement (−1 to 1, higher is better)
- `krippendorff_alpha`: Krippendorff's alpha for categorical reliability
- `valence_std`, `arousal_std`, `dominance_std`: Standard deviation of dimensional ratings (lower is better)
- `valence_icc`, `arousal_icc`, `dominance_icc`: Intraclass correlation coefficients (0–1, higher is better)
- `n_categorical_evaluators`, `n_dimensional_evaluators`: Number of evaluators per annotation type
- `consensus_valence`, `consensus_arousal`, `consensus_dominance`: Consensus dimensional ratings
|
## Usage for Curriculum Learning

Sort samples by `curriculum_order` and train on high-agreement samples first:
```python
from datasets import load_dataset

dataset = load_dataset("cairocode/MSPI_WAV_Diff_Curriculum")
train_data = dataset["train"].sort("curriculum_order")

# Split into high- and low-agreement pools (the 0.5 threshold is illustrative)
easy_samples = train_data.filter(lambda x: x["overall_agreement"] > 0.5)
hard_samples = train_data.filter(lambda x: x["overall_agreement"] <= 0.5)
```
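Beyond a single threshold split, a curriculum can grow the training pool in stages, easiest first. A minimal sketch of such a pacing scheme (the function name, stage count, and schedule are our assumptions, not the exact procedure of Lotfian & Busso, 2019):

```python
def curriculum_stages(samples, n_stages=3, key="overall_agreement"):
    """Partition samples into n_stages groups, highest agreement first.

    A simple illustrative pacing scheme: train on stage 1, then stages
    1+2, then 1+2+3, and so on. Not the exact schedule from the paper.
    """
    ordered = sorted(samples, key=lambda s: s[key], reverse=True)
    size = -(-len(ordered) // n_stages)  # ceiling division
    return [ordered[i:i + size] for i in range(0, len(ordered), size)]

# Toy example with made-up agreement scores
samples = [{"overall_agreement": v} for v in [0.9, 0.2, 0.5, 0.7]]
stages = curriculum_stages(samples, n_stages=2)
print([s["overall_agreement"] for s in stages[0]])  # -> [0.9, 0.7]
```

Each training epoch can then draw from the union of all stages unlocked so far, so low-agreement (noisier) labels only enter late in training.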
|
|
## Citation

If you use this dataset, please cite:

- Original IEMOCAP corpus: Busso et al. (2008)
- Curriculum learning approach: Lotfian & Busso (2019)
- Original dataset: cairocode/IEMO_WAV_Diff_2