How to Optimize Hyperparameters for LLM Fine-Tuning with Kubeflow

API description of LLM hyperparameter optimization using Katib

This page describes how to optimize hyperparameters (HPs) in the process of fine-tuning large language models (LLMs) using Katib’s Python API and how to configure it.

Prerequisites

You need to install the following Kubeflow components to run the code in this guide:

  • Training Operator control plane – install
  • Katib control plane – install
  • Katib Python SDK with LLM Hyperparameter Optimization Support – install

Note: If you choose to define your own custom objective function and optimize parameters within it, distributed training is currently not supported. In this case, installing the Training Operator control plane is not necessary. For detailed instructions, please refer to this guide.

Note: Ensure that the Training Operator control plane is installed prior to the Katib control plane. This guarantees that the correct namespace labels are applied and enables the PyTorchJob CRD to be utilized in Katib.

Load Model and Dataset

To optimize hyperparameters of a pre-trained model, it is essential to load the model and dataset from a provider. Currently, this can be done using external platforms like HuggingFace and S3-compatible object storage (e.g., Amazon S3) through the storage_initializer API from Kubeflow Training Operator.

HuggingFace Integration

The HuggingFace provider enables seamless integration of models and datasets for training and evaluation. You can import the necessary components for Hugging Face using the following code:

from kubeflow.storage_initializer.hugging_face import (
    HuggingFaceModelParams,
    HuggingFaceDatasetParams,
    HuggingFaceTrainerParams,
)

HuggingFaceModelParams

Description

The HuggingFaceModelParams dataclass holds configuration parameters for initializing Hugging Face models with validation checks.

| Attribute | Type | Description |
| --- | --- | --- |
| model_uri | str | URI or path to the Hugging Face model (must not be empty). |
| transformer_type | TRANSFORMER_TYPES | Specifies the model type for various NLP/ML tasks. |
| access_token | Optional[str] (default: None) | Token for accessing private models on Hugging Face. |
| num_labels | Optional[int] (default: None) | Number of output labels (used for classification tasks). |

Supported Transformer Types (TRANSFORMER_TYPES)

| Model Type | Task |
| --- | --- |
| AutoModelForSequenceClassification | Text classification |
| AutoModelForTokenClassification | Named entity recognition |
| AutoModelForQuestionAnswering | Question answering |
| AutoModelForCausalLM | Text generation (causal) |
| AutoModelForMaskedLM | Masked language modeling |
| AutoModelForImageClassification | Image classification |

Example Usage
from transformers import AutoModelForSequenceClassification

from kubeflow.storage_initializer.hugging_face import HuggingFaceModelParams


params = HuggingFaceModelParams(
    model_uri="bert-base-uncased",
    transformer_type=AutoModelForSequenceClassification,
    access_token="huggingface_access_token",
    num_labels=2  # For binary classification
)

HuggingFaceDatasetParams

Description

The HuggingFaceDatasetParams class holds configuration parameters for loading datasets from Hugging Face with validation checks.

| Attribute | Type | Description |
| --- | --- | --- |
| repo_id | str | Identifier of the dataset repository on Hugging Face (must not be empty). |
| access_token | Optional[str] (default: None) | Token for accessing private datasets on Hugging Face. |
| split | Optional[str] (default: None) | Dataset split to load (e.g., "train", "test"). |

Example Usage
from kubeflow.storage_initializer.hugging_face import HuggingFaceDatasetParams


dataset_params = HuggingFaceDatasetParams(
    repo_id="imdb",            # Public dataset repository ID on Hugging Face
    split="train",             # Dataset split to load
    access_token=None          # Not needed for public datasets
)

HuggingFaceTrainerParams

Description

The HuggingFaceTrainerParams class is used to define parameters for the training process in the Hugging Face framework. It includes the training arguments and LoRA configuration to optimize model training.

| Parameter | Type | Description |
| --- | --- | --- |
| training_parameters | transformers.TrainingArguments | Contains the training arguments like learning rate, epochs, batch size, etc. |
| lora_config | LoraConfig | LoRA configuration to reduce the number of trainable parameters in the model. |

Katib Search API for Defining Hyperparameter Search Space

The Katib Search API allows users to define the search space for hyperparameters during model tuning. This API supports continuous, discrete, and categorical parameter sampling, enabling flexible and efficient hyperparameter optimization.

Below are the available methods for defining hyperparameter search spaces:

| Function | Description | Parameter Type | Arguments |
| --- | --- | --- | --- |
| double() | Samples a continuous float value within a specified range. | double | min (float, required), max (float, required), step (float, optional) |
| int() | Samples an integer value within a specified range. | int | min (int, required), max (int, required), step (int, optional) |
| categorical() | Samples a value from a predefined list of categories. | categorical | list (List, required) |

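Each of these methods returns a search specification that can be assigned directly to a hyperparameter. A quick illustration of all three (the variable names and categorical values below are only examples):

import kubeflow.katib as katib

# Continuous value sampled between 1e-05 and 5e-05.
learning_rate = katib.search.double(min=1e-05, max=5e-05)

# Integer value sampled between 8 and 32 in steps of 8.
lora_rank = katib.search.int(min=8, max=32, step=8)

# Value sampled from a predefined list of categories.
optimizer = katib.search.categorical(["adamw_torch", "sgd"])
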
Example Usage

This is an example of how to use the HuggingFaceTrainerParams class to define the training and LoRA parameters.

import kubeflow.katib as katib
from kubeflow.storage_initializer.hugging_face import HuggingFaceTrainerParams

from transformers import TrainingArguments
from peft import LoraConfig

# Set up training and LoRA configuration
trainer_params = HuggingFaceTrainerParams(
    training_parameters=TrainingArguments(
        output_dir="results",
        # Using katib search api to define a search space for the parameter
        learning_rate=katib.search.double(min=1e-05, max=5e-05),
        num_train_epochs=3,
        per_device_train_batch_size=8,
    ),
    lora_config=LoraConfig(
        r=katib.search.int(min=8, max=32),
        lora_alpha=16,
        lora_dropout=0.1,
        bias="none",
    ),
)

S3-Compatible Object Storage Integration

In addition to Hugging Face, you can integrate with S3-compatible object storage platforms to load datasets. To work with S3, use the S3DatasetParams class to define your dataset parameters.

from kubeflow.storage_initializer.s3 import S3DatasetParams

S3DatasetParams

Description

The S3DatasetParams class is used for loading datasets from S3-compatible object storage. The parameters are defined as follows:

| Parameter | Type | Description |
| --- | --- | --- |
| endpoint_url | str | URL of the S3-compatible storage service. |
| bucket_name | str | Name of the S3 bucket containing the dataset. |
| file_key | str | Key (path) to the dataset file within the bucket. |
| region_name | str, optional | The AWS region of the S3 bucket. |
| access_key | str, optional | The access key for authentication with S3. |
| secret_key | str, optional | The secret key for authentication with S3. |

Example Usage
from kubeflow.storage_initializer.s3 import S3DatasetParams


s3_params = S3DatasetParams(
    endpoint_url="https://s3.amazonaws.com",
    bucket_name="my-dataset-bucket",
    file_key="datasets/train.csv",
    region_name="us-west-2",
    access_key="YOUR_ACCESS_KEY",
    secret_key="YOUR_SECRET_KEY"
)
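
Once constructed, the S3DatasetParams object is passed to the tuning API through the dataset_provider_parameters argument, in the same way as a Hugging Face dataset. Below is a minimal sketch that reuses the params and trainer_params objects from the earlier examples; the experiment name and resource values are illustrative:

import kubeflow.katib as katib

cl = katib.KatibClient(namespace="kubeflow")

cl.tune(
    name="s3-dataset-tuning",                # illustrative experiment name
    model_provider_parameters=params,        # HuggingFaceModelParams from the example above
    dataset_provider_parameters=s3_params,   # S3DatasetParams defined above
    trainer_parameters=trainer_params,       # HuggingFaceTrainerParams from the example above
    objective_metric_name="train_loss",
    objective_type="minimize",
    algorithm_name="random",
    max_trial_count=5,
    parallel_trial_count=1,
    resources_per_trial=katib.TrainerResources(
        num_workers=1,
        num_procs_per_worker=1,
        resources_per_worker={"gpu": 0, "cpu": 2, "memory": "10G"},
    ),
)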

Optimizing Hyperparameters of Large Language Models

When tuning large language models (LLMs) such as GPT, BERT, or similar transformer-based models, optimizing hyperparameters is crucial for improving model performance. This section covers the key parameters used for tuning LLMs via the tune function, using Katib for automated hyperparameter optimization in Kubernetes environments.

Key Parameters for LLM Hyperparameter Tuning

| Parameter | Description | Required |
| --- | --- | --- |
| name | Name of the experiment. | Required |
| model_provider_parameters | Parameters for the model provider, such as model type and configuration. | Optional |
| dataset_provider_parameters | Parameters for the dataset provider, such as dataset configuration. | Optional |
| trainer_parameters | Configuration for the trainer, including hyperparameters for model training. | Optional |
| storage_config | Configuration for storage, like PVC size and storage class. | Optional |
| objective | Objective function for training and optimization. | Optional |
| base_image | Base image for executing the objective function. | Optional |
| parameters | Hyperparameters for tuning the experiment. | Optional |
| namespace | Kubernetes namespace for the experiment. | Optional |
| env_per_trial | Environment variables for each trial. | Optional |
| algorithm_name | Algorithm used for the hyperparameter search. | Optional |
| algorithm_settings | Settings for the search algorithm. | Optional |
| objective_metric_name | Name of the objective metric for optimization. | Optional |
| additional_metric_names | List of additional metrics to collect from the objective function. | Optional |
| objective_type | Type of optimization for the objective metric (minimize or maximize). | Optional |
| objective_goal | The target value for the objective to succeed. | Optional |
| max_trial_count | Maximum number of trials to run. | Optional |
| parallel_trial_count | Number of trials to run in parallel. | Optional |
| max_failed_trial_count | Maximum number of failed trials allowed. | Optional |
| resources_per_trial | Resource requirements per trial, including CPU, memory, and GPU. | Optional |
| retain_trials | Whether to retain resources from completed trials. | Optional |
| packages_to_install | List of additional Python packages to install. | Optional |
| pip_index_url | The PyPI URL from which to install Python packages. | Optional |
| metrics_collector_config | Configuration for the metrics collector. | Optional |

  1. Supported Objective Metric for LLMs
    Currently, for large language model (LLM) hyperparameter optimization, only train_loss is supported as the objective metric. This is because train_loss is the default metric produced by the trainer.train() function in Hugging Face, which our trainer utilizes. We plan to expand support for additional metrics in future updates.

  2. Configuring resources_per_trial
    When importing models and datasets from external platforms, you are required to define resources_per_trial using the TrainerResources object. Setting num_workers to a value greater than 1 enables distributed training through PyTorchJob. To disable distributed training, simply set num_workers=1.

    Example Configuration:

      import kubeflow.katib as katib
    
      resources_per_trial=katib.TrainerResources(
         num_workers=1,
         num_procs_per_worker=1,
         resources_per_worker={"gpu": 0, "cpu": 1, "memory": "10G",},
      )
    
  3. Defining a Custom Objective Function
    In addition to importing models and datasets from external platforms using model_provider_parameters, dataset_provider_parameters, and trainer_parameters, users also have the flexibility to define a custom objective function, along with a custom image and parameters for hyperparameter optimization.

    For detailed instructions, refer to this guide.
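
    A minimal sketch of this workflow is shown below; the objective function, parameter names, and metric name are illustrative:

      import kubeflow.katib as katib

      # Illustrative objective function. Katib's default metrics collector parses
      # metrics printed to stdout in "name=value" format.
      def objective(parameters):
          result = 4 * int(parameters["a"]) - float(parameters["b"]) ** 2
          print(f"result={result}")

      cl = katib.KatibClient(namespace="kubeflow")
      cl.tune(
          name="custom-objective-example",
          objective=objective,
          parameters={
              "a": katib.search.int(min=10, max=20),
              "b": katib.search.double(min=0.1, max=0.2),
          },
          objective_metric_name="result",
          max_trial_count=12,
          resources_per_trial={"cpu": "1"},
      )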

Example: Optimizing Hyperparameters of Llama-3.2 for Binary Classification on IMDB Dataset

This example shows how to optimize the hyperparameters of a Llama-3.2 model for a binary classification task on the IMDB movie reviews dataset. The Llama-3.2 model is fine-tuned using LoRA (Low-Rank Adaptation) to reduce the number of trainable parameters. The dataset used in this example consists of 1,000 movie reviews from the IMDB dataset, and the training process is optimized through Katib to find the best hyperparameters.

Katib Configuration

The following table outlines the Katib configuration used for the hyperparameter optimization process:

| Parameter | Description |
| --- | --- |
| exp_name | Name of the experiment (llama). |
| model_provider_parameters | Parameters for the Hugging Face model (Llama-3.2). |
| dataset_provider_parameters | Parameters for the IMDB dataset (1,000 movie reviews). |
| trainer_parameters | Parameters for the Hugging Face trainer, including LoRA settings. |
| objective_metric_name | The objective metric to minimize, in this case "train_loss". |
| objective_type | Type of optimization: "minimize" for training loss. |
| algorithm_name | The optimization algorithm used, set to "random" for random search. |
| max_trial_count | Maximum number of trials to run, set to 10. |
| parallel_trial_count | Number of trials to run in parallel, set to 2. |
| resources_per_trial | Resources allocated for each trial: 2 GPUs, 4 CPUs, 10GB memory. |

Code Example

import kubeflow.katib as katib
from kubeflow.katib import KatibClient

from transformers import AutoModelForSequenceClassification, TrainingArguments
from peft import LoraConfig

from kubeflow.storage_initializer.hugging_face import (
    HuggingFaceModelParams,
    HuggingFaceDatasetParams,
    HuggingFaceTrainerParams,
)

hf_model = HuggingFaceModelParams(
    model_uri = "hf://meta-llama/Llama-3.2-1B",
    transformer_type = AutoModelForSequenceClassification,
)

# Train the model on 1000 movie reviews from imdb
# https://huggingface.co/datasets/stanfordnlp/imdb
hf_dataset = HuggingFaceDatasetParams(
    repo_id = "imdb",
    split = "train[:1000]",
)

hf_tuning_parameters = HuggingFaceTrainerParams(
    training_parameters = TrainingArguments(
        output_dir = "results",
        save_strategy = "no",
        learning_rate = katib.search.double(min=1e-05, max=5e-05),
        num_train_epochs=3,
    ),
    # Set LoRA config to reduce number of trainable model parameters.
    lora_config = LoraConfig(
        r = katib.search.int(min=8, max=32),
        lora_alpha = 8,
        lora_dropout = 0.1,
        bias = "none",
    ),
)

cl = KatibClient(namespace="kubeflow")

# Optimizing Hyperparameters for Binary Classification
exp_name = "llama"
cl.tune(
    name = exp_name,
    model_provider_parameters = hf_model,
    dataset_provider_parameters = hf_dataset,
    trainer_parameters = hf_tuning_parameters,
    objective_metric_name = "train_loss",
    objective_type = "minimize",
    algorithm_name = "random",
    max_trial_count = 10,
    parallel_trial_count = 2,
    resources_per_trial = {
        "gpu": "2",
        "cpu": "4",
        "memory": "10G",
    },
)

cl.wait_for_experiment_condition(name=exp_name)

# Get the best hyperparameters.
print(cl.get_optimal_hyperparameters(exp_name))
