Update README.md
README.md CHANGED

@@ -46,13 +46,13 @@ The supervised training tasks datasets can be downloaded on [Link](https://www.d
### Transfer-learning Pretraining

The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
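The optimizer setup above maps onto the `Adafactor` implementation in Hugging Face `transformers`. The following is a minimal sketch of that configuration, assuming the `transformers` library; it is illustrative only, not the original training code, and `t5-base` merely stands in for a ~220M-parameter encoder-decoder of the same size:

```python
# Sketch of the pre-training optimizer described above: AdaFactor with an
# inverse square root learning rate schedule. Illustrative only; the original
# training code is not part of this card.
from transformers import T5ForConditionalGeneration
from transformers.optimization import Adafactor, AdafactorSchedule

model = T5ForConditionalGeneration.from_pretrained("t5-base")

# With relative_step=True and warmup_init=True, Adafactor derives its own
# time-dependent learning rate, which decays as the inverse square root of
# the training step.
optimizer = Adafactor(
    model.parameters(),
    lr=None,                # let Adafactor compute the LR from the step count
    scale_parameter=True,
    relative_step=True,
    warmup_init=True,
)
lr_schedule = AdafactorSchedule(optimizer)  # proxy schedule so trainers can log the LR
```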
### Fine-tuning

This model was then fine-tuned on a single TPU Pod V3-8 for 1,400,000 steps in total, using sequence length 512 (batch size 256) and only the dataset containing API recommendation generation data.
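Once fine-tuned, the model can be queried like any `transformers` seq2seq checkpoint. A hedged inference sketch follows; the checkpoint id is an assumption (the CodeTrans T5-base API-generation model this card appears to describe) and the query is an invented example, so substitute the id shown on this model page if it differs:

```python
# Hedged inference sketch for the fine-tuned checkpoint described above.
# The model id below is an assumption, not stated in this README.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "SEBIS/code_trans_t5_base_api_generation_transfer_learning_finetune"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# API recommendation generation: natural-language intent in,
# suggested API usage sequence out.
query = "convert a json string into a java object"
inputs = tokenizer(query, return_tensors="pt", truncation=True, max_length=512)
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```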
## Evaluation results