Khmer Orthographic Correction System using NLLB

This model is a fine-tuned version of facebook/mbart-large-50 on the khmer-orthography-correction-dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1823
  • Cer: 0.0473
  • Wer: 0.3923
  • Bleu: 60.71 (1–4-gram precisions: 66.14 / 69.32 / 62.17 / 52.38; brevity penalty: 0.9767; sys_len: 7537; ref_len: 7715)
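
A minimal inference sketch, assuming the checkpoint is published under the repository id S-Sethisak/KOCS_NLLB shown in the model tree and loads with the standard seq2seq auto classes; the input sentence and generation settings are illustrative only:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumption: the fine-tuned checkpoint is available under this repository id.
model_id = "S-Sethisak/KOCS_NLLB"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative Khmer input to be orthographically corrected.
# Note: depending on the base checkpoint (mBART-50 or NLLB), you may also need
# to set tokenizer.src_lang / tgt_lang and pass forced_bos_token_id to generate().
text = "សូមកែអក្ខរាវិរុទ្ធប្រយោគនេះ"

inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```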

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 64
  • optimizer: adamw_torch_fused with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • num_epochs: 10
  • mixed_precision_training: Native AMP
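
For reference, a minimal sketch of how these settings map onto Seq2SeqTrainingArguments; the output directory is a placeholder and fp16=True is assumed to correspond to "Native AMP" (the actual training script is not included in this card):

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: mirrors the hyperparameters listed above.
training_args = Seq2SeqTrainingArguments(
    output_dir="kocs-nllb",            # placeholder output directory
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,     # effective train batch size of 64
    num_train_epochs=10,
    lr_scheduler_type="linear",
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
    fp16=True,                         # assumed: Native AMP mixed precision
)
```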

Training results

| Training Loss | Epoch | Step | Validation Loss | Cer    | Wer    | Bleu (score) |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------------:|
| 0.9311        | 1.0   | 842  | 0.4968          | 0.1203 | 0.7082 | 34.80        |
| 0.4966        | 2.0   | 1684 | 0.3594          | 0.0898 | 0.6000 | 44.88        |
| 0.3655        | 3.0   | 2526 | 0.2919          | 0.0767 | 0.5307 | 50.98        |
| 0.327         | 4.0   | 3368 | 0.2529          | 0.0657 | 0.4862 | 53.91        |
| 0.2712        | 5.0   | 4210 | 0.2253          | 0.0591 | 0.4484 | 55.05        |
| 0.2406        | 6.0   | 5052 | 0.2079          | 0.0530 | 0.4289 | 58.09        |
| 0.2243        | 7.0   | 5894 | 0.1962          | 0.0508 | 0.4118 | 59.14        |
| 0.2067        | 8.0   | 6736 | 0.1879          | 0.0482 | 0.3997 | 60.73        |
| 0.1962        | 9.0   | 7578 | 0.1841          | 0.0480 | 0.3939 | 60.54        |
| 0.194         | 10.0  | 8420 | 0.1823          | 0.0473 | 0.3923 | 60.71        |
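
The Cer, Wer, and Bleu columns are standard character error rate, word error rate, and sacreBLEU metrics. A hedged sketch of computing them with the Hugging Face evaluate library (the prediction/reference strings below are placeholders, not samples from the dataset):

```python
import evaluate

cer = evaluate.load("cer")          # character error rate (requires jiwer)
wer = evaluate.load("wer")          # word error rate (requires jiwer)
bleu = evaluate.load("sacrebleu")   # sacreBLEU

# Placeholder decoded outputs and gold references.
predictions = ["សួស្តី ពិភពលោក"]
references = ["សួស្តី ពិភពលោក"]

print("CER:", cer.compute(predictions=predictions, references=references))
print("WER:", wer.compute(predictions=predictions, references=references))
# sacreBLEU expects one list of references per prediction.
print("BLEU:", bleu.compute(predictions=predictions,
                            references=[[r] for r in references]))
```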

Framework versions

  • Transformers 4.57.2
  • Pytorch 2.9.0+cu126
  • Datasets 4.0.0
  • Tokenizers 0.22.1