# PitchBench

A benchmark for testing which audio / acoustic signals Audio Language Models (ALMs) do and don't understand. PitchBench probes pitch perception across 28 controlled experiments: single-pitch ID, onsets/offsets, chords, sequences, contour, audio effects, and polyphonic streams.
Each row is one (audio, question, answer) triple: a short WAV stimulus, the question (a `prompt*` column) asked of the model, and the ground-truth answer fields (experiment-specific column names).
## Quick start

```python
from datasets import load_dataset
import datasets

# Load one experiment (configuration names match experiment IDs)
ds = load_dataset("vaclis/PitchBench", "pitchbench_a1_pitch_id", split="test")
print(ds[0]["audio"], ds[0]["prompt_midi"], ds[0]["midi"])

# Iterate every experiment
for cfg in datasets.get_dataset_config_names("vaclis/PitchBench"):
    ds = load_dataset("vaclis/PitchBench", cfg, split="test")
    print(cfg, len(ds))
```
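Before calling a model, it can help to sanity-check a scoring pipeline with a dataset-independent baseline. The sketch below synthesizes a tone at the dataset's 16 kHz rate, estimates its f0 with a crude autocorrelation peak-pick, and converts Hz to a MIDI note number. Everything here (function names, parameter choices) is illustrative, not part of PitchBench itself.

```python
import numpy as np

SR = 16_000  # PitchBench stimuli are 16 kHz mono WAVs

def hz_to_midi(f0: float) -> float:
    """Frequency in Hz -> (fractional) MIDI note number, with A4 = 69 = 440 Hz."""
    return 69.0 + 12.0 * np.log2(f0 / 440.0)

def estimate_f0_autocorr(x: np.ndarray, sr: int = SR,
                         fmin: float = 50.0, fmax: float = 2000.0) -> float:
    """Crude f0 estimate: autocorrelation peak within the [fmin, fmax] lag range."""
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # non-negative lags only
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

# Synthesize A4 (440 Hz) and check the round trip lands on MIDI 69.
t = np.arange(int(0.5 * SR)) / SR
tone = np.sin(2 * np.pi * 440.0 * t)
f0 = estimate_f0_autocorr(tone)
print(round(f0, 1), round(hz_to_midi(f0)))  # f0 near 440 Hz, MIDI 69
```

The integer-lag resolution makes the estimate a few Hz coarse at 16 kHz, which is fine for semitone-level checks but not for the cents-scale experiments.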
## Experiments (28 configs, 5,115 total stimuli)

| Config | # rows | Question |
|---|---|---|
| `pitchbench_a1_pitch_id` | 305 | Identify the pitch of a single tone. |
| `pitchbench_a2_pitch_with_reference` | 150 | Identify a target pitch given a named reference tone. |
| `pitchbench_a3_pitch_by_duration` | 700 | Pitch ID across very short to very long tone durations. |
| `pitchbench_a4_pitch_with_vibrato` | 140 | Pitch ID with vibrato (rate × depth sweep). |
| `pitchbench_a5_pitch_slightly_off` | 350 | Pitch ID when the tone is detuned by a fraction of a semitone. |
| `pitchbench_b1_pitch_in_silence` | 140 | Pitch ID when the tone is hidden in a long silent stimulus. |
| `pitchbench_b2_onset_offset_single` | 560 | Predict the onset/offset times of a single tone in silence. |
| `pitchbench_b3_onset_offset_specific` | 30 | Predict onset/offset of a specific named target among distractors. |
| `pitchbench_b4_pitch_at_time` | 75 | Identify which pitch is sounding at a given timestamp. |
| `pitchbench_b5_onset_offset_each` | 30 | Predict onset/offset for every note in a sequence. |
| `pitchbench_c1_dyad_interval` | 60 | Identify the interval (in semitones) of a two-note dyad. |
| `pitchbench_c2_chord_pitch_count` | 80 | Count the number of simultaneous pitches in a chord. |
| `pitchbench_c3_chord_pitch_id` | 100 | List every pitch in a chord (dyad / triad / seventh). |
| `pitchbench_c4_chord_quality` | 50 | Classify chord quality (major, minor, dim, aug, 7th, sus, …). |
| `pitchbench_d1_seq_pitch_count` | 70 | Count the number of distinct pitches in a sequential passage. |
| `pitchbench_d2_pitch_difference` | 110 | Decide whether the second tone is higher or lower (cents-scale). |
| `pitchbench_d3_interval_id_seq` | 120 | Identify the interval between two sequentially played pitches. |
| `pitchbench_d4_contour_discrete` | 100 | Describe the up/down contour of a discrete-step melody. |
| `pitchbench_d5_contour_continuous` | 20 | Describe the contour of a continuous pitch glide. |
| `pitchbench_d6_pitch_ranking` | 40 | Rank N tones (small cents-scale differences) from low to high. |
| `pitchbench_d7_seq_pitch_id` | 15 | Transcribe every pitch in a melodic sequence. |
| `pitchbench_e1_loudness` | 420 | Pitch ID at varying loudness levels. |
| `pitchbench_e2_audio_effects` | 490 | Pitch ID under audio effects (reverb, EQ, clip, saturation, …). |
| `pitchbench_e3_background_effects` | 100 | Pitch ID embedded in real-world background noise (rain, crowd, …). |
| `pitchbench_e4_harmonic_saturation` | 280 | Pitch ID under increasing harmonic-saturation drive. |
| `pitchbench_e5_time_stretch` | 300 | Pitch ID with resample (pitch-shift) vs time-stretch (pitch preserved). |
| `pitchbench_g1_melodic_line_id` | 255 | Identify the pitch sequence of one part within a polyphonic mix. |
| `pitchbench_g2_chorale_voice_id` | 25 | Identify a target voice in a four-part Bach chorale rendering. |
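Several configs (`c1_dyad_interval`, `d2_pitch_difference`, `d3_interval_id_seq`, `d6_pitch_ranking`) score answers on a semitone or cents scale. As a reminder of the arithmetic, a sketch with the standard log-frequency formulas; the helper names are my own, not dataset fields:

```python
import math

def interval_semitones(f1_hz: float, f2_hz: float) -> float:
    """Signed interval from f1 to f2 in semitones: 12 * log2(f2 / f1)."""
    return 12.0 * math.log2(f2_hz / f1_hz)

def interval_cents(f1_hz: float, f2_hz: float) -> float:
    """Same interval expressed in cents (1 semitone = 100 cents)."""
    return 100.0 * interval_semitones(f1_hz, f2_hz)

print(round(interval_semitones(440.0, 659.255), 2))  # A4 -> E5, a perfect fifth
print(round(interval_cents(440.0, 442.0), 1))        # a small detuning, in cents
```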
## Schema

Every row has:

- `file_name` / `audio`: the WAV stimulus (16 kHz mono).
- `prompt`, or one or more of `prompt_midi`, `prompt_spn`, `prompt_abc`, `prompt_doremi`, `prompt_hz`: the question(s) put to the model.
- Experiment-specific ground-truth fields (e.g. `midi`, `n`, `interval_st`, `chord_quality_gt`, `pattern_gt`, `traj_name`, `midi_sequence`, …).
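The prompt variants quiz the same stimulus in different pitch notations (MIDI number, scientific pitch notation, Hz). A minimal converter between those notations, assuming the standard A4 = MIDI 69 = 440 Hz convention with sharps-only spelling; these helpers are illustrative, not part of the dataset:

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def midi_to_spn(midi: int) -> str:
    """MIDI note number -> scientific pitch notation, e.g. 69 -> 'A4'."""
    return f"{NOTE_NAMES[midi % 12]}{midi // 12 - 1}"

def midi_to_hz(midi: float) -> float:
    """MIDI note number -> frequency in Hz (A4 = 69 = 440 Hz)."""
    return 440.0 * 2.0 ** ((midi - 69.0) / 12.0)

def hz_to_midi(f0: float) -> float:
    """Frequency in Hz -> (fractional) MIDI note number."""
    return 69.0 + 12.0 * math.log2(f0 / 440.0)

print(midi_to_spn(69), midi_to_hz(69))  # A4 440.0
print(midi_to_spn(60))                  # C4 (middle C)
```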
## Reproducibility

Stimuli are generated deterministically from the configuration in `pitchbench.config` (`EVAL=True` benchmark constants). The subset published here is the seeded stratified sample used in the paper, reproduced by `apply_default_sampling(EXP_NAME, all_conds, None, seed=42)` over the output of each experiment's `build_conditions(...)`. Source code lives at github/PitchBench (insert the canonical link before public release).
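`apply_default_sampling` is defined in the PitchBench repository; for readers without the source handy, a generic sketch of what seeded stratified sampling of condition dicts looks like, with hypothetical names and a made-up stratum key:

```python
import random
from collections import defaultdict

def stratified_sample(conditions, key, per_stratum, seed=42):
    """Deterministically sample up to `per_stratum` items per stratum.

    `conditions` is a list of dicts; `key` names the field to stratify on.
    A fixed seed and sorted stratum order make the subset reproducible.
    """
    strata = defaultdict(list)
    for cond in conditions:
        strata[cond[key]].append(cond)
    rng = random.Random(seed)
    sample = []
    for stratum in sorted(strata):  # iterate strata in a stable order
        group = strata[stratum]
        sample.extend(rng.sample(group, min(per_stratum, len(group))))
    return sample

# Toy conditions: three durations x ten variants each.
conds = [{"dur": d, "idx": i} for d in (0.1, 0.5, 2.0) for i in range(10)]
subset = stratified_sample(conds, key="dur", per_stratum=4, seed=42)
print(len(subset))  # 12: four stimuli from each of the three durations
```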
## License

Released under CC-BY-4.0.
## Citation

```bibtex
@misc{pitchbench2026,
  title  = {PitchBench: A Benchmark for Pitch Understanding in Audio Language Models},
  author = {<authors>},
  year   = {2026},
}
```