Recommended options for multimodal model training

  • Recommended VRAM [source]

    • Full LLM training: 8x 32G/40G

    • With LoRA: 2x 32G/40G

  • Recommended hyperparameters (see the launch sketch after this list)

{
  "max_seq_length": 1024,
  "num_train_epochs": 1,
  "per_device_train_batch_size": 1,
  "learning_rate": 0.00004,
  "log_level": "warning",
  "logging_dir": "./logs",
  "logging_strategy": "no",
  "logging_first_step": 1,
  "logging_steps": 10,
  "fp16": 0,
  "bf16": 1,
  "seed": 42,
  "conv_style": "Hermes-2",
  "force_image_size": 448,
  "max_dynamic_patch": 6,
  "down_sample_ratio": 0.5,
  "drop_path_rate": 0,
  "freeze_llm": true,
  "freeze_mlp": true,
  "freeze_backbone": false,
  "use_llm_lora": 16,
  "vision_select_layer": -1,
  "dataloader_num_workers": 4,
  "save_total_limit": 1,
  "weight_decay": 0.05,
  "warmup_ratio": 0.03,
  "lr_scheduler_type": "cosine",
  "do_train": true,
  "grad_checkpoint": true,
  "group_by_length": true,
  "use_thumbnail": true,
  "ps_version": "v2",
  "eval_ratio": 0.1
}
  • Recommended training configuration details
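
The hyperparameter JSON above can be flattened into command-line flags for a training launch. Below is a minimal sketch, assuming the JSON is saved as recommended_hparams.json and the finetuning entrypoint is a placeholder train.py; both names, and the flag-per-key convention, are assumptions and not taken from this page.

import json
import shlex

# Load the recommended hyperparameters
# ("recommended_hparams.json" is a hypothetical file holding the JSON shown above).
with open("recommended_hparams.json") as f:
    hparams = json.load(f)

# Turn each key/value pair into a "--key value" flag. Booleans are passed as the
# strings "True"/"False"; adjust if the training script expects store_true-style switches.
args = []
for key, value in hparams.items():
    args += [f"--{key}", str(value)]

# Example launch for the 2-GPU LoRA setting from the VRAM guidance above;
# "train.py" stands in for the actual multimodal finetuning entrypoint.
cmd = "torchrun --nproc_per_node 2 train.py " + " ".join(shlex.quote(a) for a in args)
print(cmd)

For full LLM training, the same command would be launched with 8 GPUs (--nproc_per_node 8), matching the VRAM guidance above.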
