[![Slack][slack-badge]][slack-invite]

[slack-badge]: https://img.shields.io/badge/slack-chat-green.svg?logo=slack
[slack-invite]: https://join.slack.com/t/chime-fey5388/shared_invite/zt-1oha0gedv-JEUr1mSztR7~iK9AxM4HOA

# Introduction

Welcome to the "NOTSOFAR-1: Distant Meeting Transcription with a Single Device" Challenge.

This repo contains the baseline system code for the NOTSOFAR-1 Challenge.

- For more information about NOTSOFAR, visit [CHiME's official challenge website](https://www.chimechallenge.org/current/task2/index).
- [Register](https://www.chimechallenge.org/current/task2/submission) to participate.
- [Baseline system description](https://www.chimechallenge.org/current/task2/baseline).
- Contact us: join the `chime-8-notsofar` channel on the [CHiME Slack](https://join.slack.com/t/chime-fey5388/shared_invite/zt-1oha0gedv-JEUr1mSztR7~iK9AxM4HOA), or open a [GitHub issue](https://github.com/microsoft/NOTSOFAR1-Challenge/issues).

### 📊 Baseline Results on NOTSOFAR dev-set-1

Values are presented in `tcpWER / tcORC-WER (session count)` format.
<br>
As noted on the [official website](https://www.chimechallenge.org/current/task2/index#tracks), systems are ranked by the speaker-attributed [tcpWER](https://github.com/fgnt/meeteval/blob/main/doc/tcpwer.md), while the speaker-agnostic [tcORC-WER](https://github.com/fgnt/meeteval) serves as a supplementary metric for analysis.
<br>
We include analysis based on a selection of hashtags from our [metadata](https://www.chimechallenge.org/current/task2/data#metadata), providing insights into how different conditions affect system performance.

| | Single-Channel | Multi-Channel |
|----------------------|-----------------------|-----------------------|
| All Sessions | **46.8** / 38.5 (177) | **32.4** / 26.7 (106) |
| #NaturalMeeting | 47.6 / 40.2 (30) | 32.3 / 26.2 (18) |
| #DebateOverlaps | 54.9 / 44.7 (39) | 38.0 / 31.4 (24) |
| #TurnsNoOverlap | 32.4 / 29.7 (10) | 21.2 / 18.8 (6) |
| #TransientNoise=high | 51.0 / 43.7 (10) | 33.6 / 29.1 (5) |
| #TalkNearWhiteboard | 55.4 / 43.9 (40) | 39.9 / 31.2 (22) |
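
For reference, both metrics can be computed with the [meeteval](https://github.com/fgnt/meeteval) toolkit. Below is a minimal sketch assuming time-marked reference and hypothesis files in STM format; the file names are placeholders, and the 5-second collar and exact subcommand names should be verified against `meeteval-wer --help` and the challenge rules.

```bash
# Speaker-attributed ranking metric (hypothetical file names).
meeteval-wer tcpwer -r ref.stm -h hyp.stm --collar 5

# Supplementary speaker-agnostic metric.
meeteval-wer tcorcwer -r ref.stm -h hyp.stm --collar 5
```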


# Project Setup
The following steps will guide you through setting up the project on your machine. <br>

### Windows Users
This project is compatible with **Linux** environments. Windows users can refer to the [Docker](#docker) or
[Devcontainer](#devcontainer) sections. <br>
Alternatively, install WSL2 by following the [WSL2 Installation Guide](https://learn.microsoft.com/en-us/windows/wsl/install), then install Ubuntu 20.04 from the [Microsoft Store](https://www.microsoft.com/en-us/p/ubuntu-2004-lts/9n6svws3rx71?activetab=pivot:overviewtab). <br>

## Cloning the Repository

Clone the `NOTSOFAR1-Challenge` repository from GitHub. Open your terminal and run the following commands:

```bash
sudo apt-get install git
cd path/to/your/projects/directory
git clone https://github.com/microsoft/NOTSOFAR1-Challenge.git
```


## Setting up the environment

### Conda

#### Step 1: Install Conda

Conda is a package manager used to install Python and other dependencies.<br>
To install Miniconda, a minimal version of Conda, run the following commands:

```bash
miniconda_dir="$HOME/miniconda3"
script="Miniconda3-latest-Linux-$(uname -m).sh"
wget --tries=3 "https://repo.anaconda.com/miniconda/${script}"
bash "${script}" -b -p "${miniconda_dir}"
export PATH="${miniconda_dir}/bin:$PATH"
```
**Note:** you may change the `miniconda_dir` variable to install Miniconda in a different directory.


#### Step 2: Create a Conda Environment

Conda environments are used to isolate Python dependencies. <br>
To set it up, run the following commands:

```bash
source "/path/to/conda/dir/etc/profile.d/conda.sh"
conda create --name notsofar python=3.10 -y
conda activate notsofar
cd /path/to/NOTSOFAR1-Challenge
python -m pip install --upgrade pip
pip install --upgrade setuptools wheel Cython fasttext-wheel
pip install -r requirements.txt
conda install ffmpeg -c conda-forge -y
```

### PIP

#### Step 1: Install Python 3.10

Python 3.10 is required to run the project. To install it, run the following commands:

```bash
sudo apt update && sudo apt upgrade
sudo add-apt-repository ppa:deadsnakes/ppa -y
sudo apt update
sudo apt install python3.10
```

#### Step 2: Set Up the Python Virtual Environment

Python virtual environments are used to isolate Python dependencies. <br>
To set it up, run the following commands:

```bash
sudo apt-get install python3.10-venv
python3.10 -m venv /path/to/virtualenvs/NOTSOFAR
source /path/to/virtualenvs/NOTSOFAR/bin/activate
```

#### Step 3: Install Python Dependencies

Navigate to the cloned repository and install the required Python dependencies:

```bash
cd /path/to/NOTSOFAR1-Challenge
python -m pip install --upgrade pip
pip install --upgrade setuptools wheel Cython fasttext-wheel
sudo apt-get install python3.10-dev ffmpeg build-essential
pip install -r requirements.txt
```

### Docker

Refer to the `Dockerfile` in the project's root for dependencies setup. To use Docker, ensure you have Docker installed on your system and configured to use Linux containers.
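
A typical flow might look like the following sketch; the `notsofar` image tag is arbitrary and the `/workspace` mount point is an assumption, while the `Dockerfile` itself defines the actual environment.

```bash
# Build the image from the repo root, then start an interactive container
# with the repository mounted inside it.
docker build -t notsofar .
docker run --rm -it -v "$(pwd)":/workspace -w /workspace notsofar bash
```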

### Devcontainer
With the provided `devcontainer.json` you can run and work on the project in a [devcontainer](https://containers.dev/) using, for example, the [Dev Containers VSCode Extension](https://code.visualstudio.com/docs/devcontainers/containers).


# Running evaluation - the inference pipeline
The following command will download the **entire dev-set** of the recorded meeting dataset and run the inference pipeline
according to the selected configuration. The default is `--config-name dev_set_1_mc_debug` for quick debugging,
running on a single session with the Whisper 'tiny' model.
```bash
cd /path/to/NOTSOFAR1-Challenge
python run_inference.py
```

To run on all multi-channel or single-channel dev-set sessions, use the following commands respectively:
```bash
python run_inference.py --config-name full_dev_set_mc
python run_inference.py --config-name full_dev_set_sc
```
The first time `run_inference.py` runs, it will automatically download the required models and datasets from blob storage:

1. The development set of the meeting dataset (dev-set) will be stored in the `artifacts/meeting_data` directory.
2. The CSS models required to run the inference pipeline will be stored in the `artifacts/css_models` directory.

Outputs will be written to the `artifacts/outputs` directory.

The `session_query` argument found in the yaml config file (e.g. `configs/inference/inference_v1.yaml`) offers more control over filtering meetings.
Note that to submit results on the dev-set, you must evaluate on the full set (`full_dev_set_mc` or `full_dev_set_sc`) with no filtering applied.
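
For illustration, a hypothetical debugging filter might look like this in the yaml config. The query string and the `meeting_id` value below are made up; the columns you can filter on are defined by the meetings metadata, so consult `configs/inference/inference_v1.yaml` for the real structure.

```yaml
# Illustrative only: restrict inference to a single session while debugging.
session_query: "meeting_id == 'MTG_30860'"
```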


# Integrating your own models
The inference pipeline is modular, designed for easy research and extension.
Begin by exploring the following components:
- **Continuous Speech Separation (CSS)**: See `css_inference` in `css.py`. We provide a model pre-trained on NOTSOFAR's simulated training dataset, as well as inference and training code. For more information, refer to the [CSS section](#running-css-continuous-speech-separation-training).
- **Automatic Speech Recognition (ASR)**: See `asr_inference` in `asr.py`. The baseline implementation relies on [Whisper](https://github.com/openai/whisper) (see the sketch after this list).
- **Speaker Diarization**: See `diarization_inference` in `diarization.py`. The baseline implementation relies on the [NeMo toolkit](https://github.com/NVIDIA/NeMo).
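
As a starting point for swapping in your own ASR, the sketch below wraps the public `openai-whisper` API. It is illustrative only: the function name and inputs are assumptions, not the exact `asr_inference` signature, which you should take from `asr.py`.

```python
import whisper  # pip install openai-whisper

def transcribe_channels(wav_paths: list[str], model_name: str = "large-v2") -> list[dict]:
    """Hypothetical helper: transcribe each separated channel with word timestamps.

    The real asr_inference in asr.py defines the actual inputs/outputs
    expected by the rest of the pipeline (e.g. by diarization).
    """
    model = whisper.load_model(model_name)
    results = []
    for path in wav_paths:
        # word_timestamps=True yields per-word start/end times, which the
        # time-constrained WER metrics depend on.
        results.append(model.transcribe(path, word_timestamps=True))
    return results
```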

### Training datasets
For training and fine-tuning your models, NOTSOFAR offers the **simulated training set** and the training portion of the
**recorded meeting dataset**. Refer to the `download_simulated_subset` and `download_meeting_subset` functions in
[utils/azure_storage.py](https://github.com/microsoft/NOTSOFAR1-Challenge/blob/main/utils/azure_storage.py#L109),
or the [NOTSOFAR-1 Datasets](#notsofar-1-datasets---download-instructions) section.


# Running CSS (continuous speech separation) training

## 1. Local training on a data sample for development and debugging
The following command will run CSS training on the 10-second simulated training data sample in `sample_data/css_train_set`:
```bash
cd /path/to/NOTSOFAR1-Challenge
python run_training_css_local.py
```

## 2. Training on the full simulated training dataset

### Step 1: Download the simulated training dataset
You can use the `download_simulated_subset` function in
[utils/azure_storage.py](https://github.com/microsoft/NOTSOFAR1-Challenge/blob/main/utils/azure_storage.py)
to download the training dataset from blob storage.
You can download either the complete dataset, comprising almost 1000 hours, or a smaller 200-hour subset.

Examples:
```python
import os
from utils.azure_storage import download_simulated_subset

my_dir = '/path/to/your/datasets/dir'
ver = 'v1.5'  # this should point to the latest version of the dataset

# Option 1: Download the training and validation sets of the entire 1000-hour dataset.
train_set_path = download_simulated_subset(
    version=ver, volume='1000hrs', subset_name='train', destination_dir=os.path.join(my_dir, 'train'))

val_set_path = download_simulated_subset(
    version=ver, volume='1000hrs', subset_name='val', destination_dir=os.path.join(my_dir, 'val'))


# Option 2: Download the training and validation sets of the smaller 200-hour dataset.
train_set_path = download_simulated_subset(
    version=ver, volume='200hrs', subset_name='train', destination_dir=os.path.join(my_dir, 'train'))

val_set_path = download_simulated_subset(
    version=ver, volume='200hrs', subset_name='val', destination_dir=os.path.join(my_dir, 'val'))
```

### Step 2: Run CSS training
Once you have downloaded the training dataset, you can run CSS training on it using the `run_training_css` function in `css/training/train.py`.
The `main` function in `run_training_css.py` provides an entry point with `conf`, `data_root_in`, and `data_root_out` arguments that you can use to configure the run.

Note that setting up and provisioning a compute environment for this training process is the user's responsibility. Our code supports **PyTorch's Distributed Data Parallel (DDP)** framework, so you can efficiently leverage multiple GPUs across several nodes.
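
As an illustration, a single-node multi-GPU launch might look like the following. The `torchrun` flags are standard PyTorch, but whether `conf`, `data_root_in`, and `data_root_out` are exposed as CLI flags in exactly this form is an assumption; check the `main` function in `run_training_css.py` first.

```bash
# Hypothetical single-node, 8-GPU DDP launch; adjust the flags to match
# the actual argument parsing in run_training_css.py.
torchrun --standalone --nproc_per_node=8 run_training_css.py \
    --conf my_css_config \
    --data_root_in /path/to/downloaded/css_data \
    --data_root_out /path/to/training/outputs
```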

### Step 3: Customizing the CSS model
To add a new CSS model, you need to do the following:
1. Have your model implement the same interface as our baseline CSS model class, `ConformerCssWrapper`, located
in `css/training/conformer_wrapper.py`. Note that in addition to the `forward` method, it must also implement the
`separate`, `stft`, and `istft` methods. The latter three methods are used in the inference pipeline and to
calculate the loss during training (see the sketch after this list).
2. Create a configuration dataclass for your model. Add it as a member of the `TrainCfg` dataclass in
`css/training/train.py`.
3. Add your model to the `get_model` function in `css/training/train.py`.
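
A minimal skeleton of such a model might look like this. The method names come from the list above, but the argument lists, tensor shapes, and STFT settings are assumptions; mirror `ConformerCssWrapper` in `css/training/conformer_wrapper.py` for the exact interface.

```python
import torch
import torch.nn as nn

class MyCssModel(nn.Module):
    """Hypothetical CSS model exposing the interface described above."""

    def __init__(self, n_fft: int = 512, hop: int = 256):
        super().__init__()
        self.n_fft, self.hop = n_fft, hop
        self.net = nn.Identity()  # placeholder for your separator network

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # Training-time forward pass over input features.
        return self.net(features)

    def separate(self, mix: torch.Tensor) -> torch.Tensor:
        # Inference-time separation of the mixture into per-speaker streams.
        return self.forward(mix)

    def stft(self, wav: torch.Tensor) -> torch.Tensor:
        window = torch.hann_window(self.n_fft, device=wav.device)
        return torch.stft(wav, self.n_fft, self.hop, window=window, return_complex=True)

    def istft(self, spec: torch.Tensor) -> torch.Tensor:
        window = torch.hann_window(self.n_fft, device=spec.device)
        return torch.istft(spec, self.n_fft, self.hop, window=window)
```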



# NOTSOFAR-1 Datasets - Download Instructions
This section is for those specifically interested in downloading the NOTSOFAR datasets.<br>
The NOTSOFAR-1 Challenge provides two datasets: a recorded meeting dataset and a simulated training dataset. <br>
The datasets are stored in Azure Blob Storage; to download them, you will need to set up [AzCopy](https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-v10#download-azcopy).

You can use either the Python utilities in `utils/azure_storage.py` or the `AzCopy` command to download the datasets as described below.


### Meeting Dataset for Benchmarking and Training

The NOTSOFAR-1 Recorded Meeting Dataset is a collection of 315 meetings, each averaging 6 minutes, recorded across 30 conference rooms with 4-8 attendees, featuring a total of 35 unique speakers. This dataset captures a broad spectrum of real-world acoustic conditions and conversational dynamics.

### Download

To download the dataset, you can call the Python function `download_meeting_subset` within `utils/azure_storage.py`.
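
For example, a call might look like the sketch below; the keyword arguments mirror those listed for the AzCopy route and the `download_simulated_subset` examples above, but you should confirm the exact signature in `utils/azure_storage.py`.

```python
import os
from utils.azure_storage import download_meeting_subset

my_dir = '/path/to/your/datasets/dir'
# Version '240415.2_dev' matches the AzCopy example below; use the latest.
dev_set_path = download_meeting_subset(
    subset_name='dev_set', version='240415.2_dev',
    destination_dir=os.path.join(my_dir, 'benchmark'))
```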

Alternatively, using the AzCopy CLI, set these arguments and run the following command:

- `subset_name`: name of the split to download (`dev_set` / `eval_set` / `train_set`).
- `version`: version to download (`240103g` / etc.). Use the latest version.
- `datasets_path`: path to the directory where you want to download the benchmarking dataset (destination directory must exist). <br>

Train, dev, and eval sets for the NOTSOFAR challenge are released in stages.
See the release timeline on the [NOTSOFAR page](https://www.chimechallenge.org/current/task2/index#dates).
See the docstring of the `download_meeting_subset` function in
[utils/azure_storage.py](https://github.com/microsoft/NOTSOFAR1-Challenge/blob/main/utils/azure_storage.py#L109)
for the latest available versions.

```bash
azcopy copy https://notsofarsa.blob.core.windows.net/benchmark-datasets/<subset_name>/<version>/MTG <datasets_path>/benchmark --recursive
```

Example:
```bash
azcopy copy https://notsofarsa.blob.core.windows.net/benchmark-datasets/dev_set/240415.2_dev/MTG . --recursive
```


### Simulated Training Dataset

The NOTSOFAR-1 Training Dataset is a 1000-hour simulated training dataset, synthesized with enhanced authenticity for real-world generalization, incorporating 15,000 real acoustic transfer functions.

### Download

To download the dataset, you can call the Python function `download_simulated_subset` within `utils/azure_storage.py`.
Alternatively, using the AzCopy CLI, set these arguments and run the following command:

- `version`: version of the train data to download (`v1.1` / `v1.2` / `v1.3` / `v1.4` / `v1.5` / etc.).
  See the docstring of the `download_simulated_subset` function in `utils/azure_storage.py` for the latest available versions.
- `volume`: volume of the train data to download (`200hrs` / `1000hrs`).
- `subset_name`: train data type to download (`train` / `val`).
- `datasets_path`: path to the directory where you want to download the simulated dataset (destination directory must exist). <br>

```bash
azcopy copy https://notsofarsa.blob.core.windows.net/css-datasets/<version>/<volume>/<subset_name> <datasets_path>/benchmark --recursive
```

Example:
```bash
azcopy copy https://notsofarsa.blob.core.windows.net/css-datasets/v1.5/200hrs/train . --recursive
```


## Data License
This public data is currently licensed for use exclusively in the NOTSOFAR challenge event.
We appreciate your understanding that it is not yet available for academic or commercial use.
However, we are actively working towards expanding its availability for these purposes.
We anticipate a forthcoming announcement that will enable broader and more impactful use of this data. Stay tuned for updates.
Thank you for your interest and patience.


# 🤝 Contribute

Please refer to our [contributing guide](CONTRIBUTING.md) for more information on how to contribute!