Fine-tuning
Last updated
The following models are available to use with our fine-tuning API.
Training Precision Type indicates the precision type used during training for each model.
bf16 (bfloat16): This uses bf16 for all weights. Some large models on our platform use full bf16 training for better memory usage and training speed.
Model String for API | Context Length | Training Precision Type* | Batch Size Range | Price (1M Tokens) |
---|---|---|---|---|
meta-llama/Meta-Llama-3-8B-Instruct | 8192 | bf16 | 1-4 | $0.48 |
meta-llama/Llama-3.1-8B-Instruct | 8192 | bf16 | 1-4 | $0.48 |
Qwen/Qwen2.5-7B-Instruct | 32768 | bf16 | 1-2 | $0.42 |

For fine-tuning of additional models, please email us (support@netmind.ai).

Pricing for fine-tuning is based on model size, the number of training tokens, the number of validation tokens, the number of evaluations, and the number of epochs. In other words, the total number of tokens used in a job is n_epochs * n_tokens_per_dataset.

For example, if you start a "meta-llama/Meta-Llama-3-8B-Instruct" fine-tuning job with 1M tokens and 1 epoch, the cost will be $0.48.
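The pricing rule above can be sketched as a small cost estimator. This is an illustrative helper, not part of the NetMind API; the function name and structure are assumptions, while the prices come from the table:

```python
# Per-1M-token fine-tuning prices, copied from the table above.
PRICE_PER_1M_TOKENS = {
    "meta-llama/Meta-Llama-3-8B-Instruct": 0.48,
    "meta-llama/Llama-3.1-8B-Instruct": 0.48,
    "Qwen/Qwen2.5-7B-Instruct": 0.42,
}

def estimate_cost(model: str, n_tokens_per_dataset: int, n_epochs: int) -> float:
    """Estimated cost in USD: total tokens (n_epochs * n_tokens_per_dataset)
    billed at the model's per-1M-token price."""
    total_tokens = n_epochs * n_tokens_per_dataset
    return total_tokens / 1_000_000 * PRICE_PER_1M_TOKENS[model]

# The worked example from the text: 1M tokens, 1 epoch.
print(estimate_cost("meta-llama/Meta-Llama-3-8B-Instruct", 1_000_000, 1))  # → 0.48
```

Note that the token count here covers both training and validation data, since both contribute to the tokens processed per epoch.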