

<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! -->

------------------------------------------------------------------------

<a
href="https://github.com/mustafaslanCoto/peshbeen/blob/main/peshbeen/models/ml_mv_forecaster.py#L24"
target="_blank" style="float:right; font-size:smaller">source</a>

### ml_mv_forecaster

``` python

def ml_mv_forecaster(
    model:Any, # A scikit-learn compatible regression model instance (e.g. LGBMRegressor(), CatBoostRegressor(), LinearRegression(), etc.).
    target_cols:List[str], # List of target variable names to forecast.
    lags:Optional[Dict[str, Union[int, List[int]]]]=None, # Dictionary specifying lag features to create for each target variable. The value can be an integer (number of lags) or a list of specific lag periods.
    lag_transform:Optional[Dict[str, list]]=None, # Dictionary specifying lag-based transformations to apply for each target variable. The value should be a list of transformation functions (e.g. rolling_mean, expanding_std) with their parameters encapsulated in the function instance.
    difference:Optional[Dict[str, int]]=None, # Dictionary specifying the order of ordinary differencing to apply for each target variable.
    seasonal_diff:Optional[Dict[str, int]]=None, # Dictionary specifying the order of seasonal differencing to apply for each target variable.
    trend:Optional[Dict[str, str]]=None, # Dictionary specifying the trend removal strategy for each target variable. Supported values are 'linear', 'ets', 'feature_lr', and 'feature_ets'.
    pol_degree:Optional[Union[int, Dict[str, int]]]=1, # Polynomial degree for linear trend removal. Can be a single integer applied to all targets or a dictionary specifying the degree for each target variable.
    ets_params:Optional[Dict[str, Any]]=None, # Dictionary specifying ETS model and fit parameters for each target variable when using 'ets' trend removal. Each value is a dictionary of parameters for the ExponentialSmoothing model and fitting process.
    change_points:Optional[Dict[str, List[int]]]=None, # Dictionary specifying change points for piecewise linear trend removal for each target variable. The value should be a list of integer indices where the trend slope can change.
    box_cox:Optional[Dict[str, Union[bool, float, int]]]=None, # Dictionary specifying whether to apply Box-Cox transformation for each target variable. The value can be a boolean (True to apply with lambda estimated from data, False to skip) or a float (specific lambda value to use).
    box_cox_biasadj:Optional[Dict[str, bool]]=None, # Dictionary specifying whether to apply bias adjustment when inverting Box-Cox transformation for each target variable.
    cat_variables:Optional[List[str]]=None, # List of categorical feature column names to encode. These will be shared across all target variables.
    categorical_encoder:Optional[Union[Dict[str, Any], Any]]=None, # A categorical encoder instance, or a single-entry dictionary mapping a target column to the encoder when the encoder requires access to the target variable during fitting (e.g. {target_col: MeanEncoder()}). If an encoder that requires target access is provided directly without the dictionary format, the first column in target_cols is used to fit the encoder. For encoders that do not require target access, pass the instance directly (e.g. OneHotEncoder()).
)->None:

```

*Initialize the multi-target machine learning forecaster with the
specified transformations and model.*

<table>
<colgroup>
<col style="width: 6%" />
<col style="width: 25%" />
<col style="width: 34%" />
<col style="width: 34%" />
</colgroup>
<thead>
<tr>
<th></th>
<th><strong>Type</strong></th>
<th><strong>Default</strong></th>
<th><strong>Details</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>model</td>
<td>Any</td>
<td></td>
<td>A scikit-learn compatible regression model instance
(e.g. LGBMRegressor(), CatBoostRegressor(), LinearRegression(),
etc.).</td>
</tr>
<tr>
<td>target_cols</td>
<td>List[str]</td>
<td></td>
<td>List of target variable names to forecast.</td>
</tr>
<tr>
<td>lags</td>
<td>Optional[Dict[str, Union[int, List[int]]]]</td>
<td>None</td>
<td>Dictionary specifying lag features to create for each target
variable. The value can be an integer (number of lags) or a list of
specific lag periods.</td>
</tr>
<tr>
<td>lag_transform</td>
<td>Optional[Dict[str, list]]</td>
<td>None</td>
<td>Dictionary specifying lag-based transformations to apply for each
target variable. The value should be a list of transformation functions
(e.g. rolling_mean, expanding_std) with their parameters encapsulated in
the function instance.</td>
</tr>
<tr>
<td>difference</td>
<td>Optional[Dict[str, int]]</td>
<td>None</td>
<td>Dictionary specifying the order of ordinary differencing to apply
for each target variable.</td>
</tr>
<tr>
<td>seasonal_diff</td>
<td>Optional[Dict[str, int]]</td>
<td>None</td>
<td>Dictionary specifying the order of seasonal differencing to apply
for each target variable.</td>
</tr>
<tr>
<td>trend</td>
<td>Optional[Dict[str, str]]</td>
<td>None</td>
<td>Dictionary specifying the trend removal strategy for each target
variable. Supported values are ‘linear’, ‘ets’, ‘feature_lr’, and
‘feature_ets’.</td>
</tr>
<tr>
<td>pol_degree</td>
<td>Optional[Union[int, Dict[str, int]]]</td>
<td>1</td>
<td>Polynomial degree for linear trend removal. Can be a single integer
applied to all targets or a dictionary specifying the degree for each
target variable.</td>
</tr>
<tr>
<td>ets_params</td>
<td>Optional[Dict[str, Any]]</td>
<td>None</td>
<td>Dictionary specifying ETS model and fit parameters for each target
variable when using ‘ets’ trend removal. Each value is a dictionary of
parameters for the ExponentialSmoothing model and fitting process.</td>
</tr>
<tr>
<td>change_points</td>
<td>Optional[Dict[str, List[int]]]</td>
<td>None</td>
<td>Dictionary specifying change points for piecewise linear trend
removal for each target variable. The value should be a list of integer
indices where the trend slope can change.</td>
</tr>
<tr>
<td>box_cox</td>
<td>Optional[Dict[str, Union[bool, float, int]]]</td>
<td>None</td>
<td>Dictionary specifying whether to apply Box-Cox transformation for
each target variable. The value can be a boolean (True to apply with
lambda estimated from data, False to skip) or a float (specific lambda
value to use).</td>
</tr>
<tr>
<td>box_cox_biasadj</td>
<td>Optional[Dict[str, bool]]</td>
<td>None</td>
<td>Dictionary specifying whether to apply bias adjustment when
inverting Box-Cox transformation for each target variable.</td>
</tr>
<tr>
<td>cat_variables</td>
<td>Optional[List[str]]</td>
<td>None</td>
<td>List of categorical feature column names to encode. These will be
shared across all target variables.</td>
</tr>
<tr>
<td>categorical_encoder</td>
<td>Optional[Union[Dict[str, Any], Any]]</td>
<td>None</td>
<td>A categorical encoder instance, or a single-entry dictionary mapping
a target column to the encoder when the encoder requires access to the
target variable during fitting (e.g. {target_col: MeanEncoder()}). If an
encoder that requires target access is provided directly without the
dictionary format, the first column in target_cols is used to fit the
encoder. For encoders that do not require target access, pass the
instance directly (e.g. OneHotEncoder()).</td>
</tr>
<tr>
<td><strong>Returns</strong></td>
<td><strong>None</strong></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
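The per-target dictionary parameters above are easiest to read by example. The sketch below uses plain pandas to illustrate what a `lags` spec (an integer meaning lags 1..n, or a list meaning exactly those periods) and a `difference` spec (order of ordinary differencing) correspond to as features; this is an illustration of the parameter semantics, not the forecaster's internal implementation.

``` python
import pandas as pd

# Toy frame with two targets, mirroring target_cols=["y1", "y2"].
df = pd.DataFrame({"y1": [10, 12, 13, 15, 18], "y2": [1, 2, 4, 7, 11]})

# lags={"y1": 3, "y2": [1]}: an int n means lags 1..n,
# a list means exactly those lag periods.
lags = {"y1": 3, "y2": [1]}
features = pd.DataFrame(index=df.index)
for col, spec in lags.items():
    periods = range(1, spec + 1) if isinstance(spec, int) else spec
    for p in periods:
        features[f"{col}_lag{p}"] = df[col].shift(p)

# difference={"y1": 1}: first-order ordinary differencing of that target.
diffed = df["y1"].diff(1)
```

A constructor call would then pass such specs directly, e.g. `ml_mv_forecaster(model=LGBMRegressor(), target_cols=["y1", "y2"], lags=lags, difference={"y1": 1})` (hypothetical usage assembled from the signature above).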

------------------------------------------------------------------------

<a
href="https://github.com/mustafaslanCoto/peshbeen/blob/main/peshbeen/models/ml_mv_forecaster.py#L378"
target="_blank" style="float:right; font-size:smaller">source</a>

### ml_mv_forecaster.fit

``` python

def fit(
    df:pd.DataFrame, # Training DataFrame containing all target and feature columns.
)->None:

```

*Fit the model to the data passed in `df`.*

<table>
<colgroup>
<col style="width: 9%" />
<col style="width: 38%" />
<col style="width: 52%" />
</colgroup>
<thead>
<tr>
<th></th>
<th><strong>Type</strong></th>
<th><strong>Details</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>df</td>
<td>pd.DataFrame</td>
<td>Training DataFrame containing all target and feature columns.</td>
</tr>
<tr>
<td><strong>Returns</strong></td>
<td><strong>None</strong></td>
<td></td>
</tr>
</tbody>
</table>

------------------------------------------------------------------------

<a
href="https://github.com/mustafaslanCoto/peshbeen/blob/main/peshbeen/models/ml_mv_forecaster.py#L449"
target="_blank" style="float:right; font-size:smaller">source</a>

### ml_mv_forecaster.forecast

``` python

def forecast(
    H:int, # Forecast horizon (number of steps to forecast ahead).
    exog:Optional[pd.DataFrame]=None, # Future exogenous regressors (must contain at least H rows).
)->Dict[str, np.ndarray]: # A dictionary where keys are target column names and values are arrays of H forecasted values for each target variable.

```

*Generate forecasts for H future time steps.*

<table>
<colgroup>
<col style="width: 6%" />
<col style="width: 25%" />
<col style="width: 34%" />
<col style="width: 34%" />
</colgroup>
<thead>
<tr>
<th></th>
<th><strong>Type</strong></th>
<th><strong>Default</strong></th>
<th><strong>Details</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>H</td>
<td>int</td>
<td></td>
<td>Forecast horizon (number of steps to forecast ahead).</td>
</tr>
<tr>
<td>exog</td>
<td>Optional[pd.DataFrame]</td>
<td>None</td>
<td>Future exogenous regressors (must contain at least H rows).</td>
</tr>
<tr>
<td><strong>Returns</strong></td>
<td><strong>Dict[str, np.ndarray]</strong></td>
<td></td>
<td><strong>A dictionary where keys are target column names and values
are arrays of H forecasted values for each target
variable.</strong></td>
</tr>
</tbody>
</table>
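The documented return shape (keys are target column names, values are length-`H` arrays) makes the output easy to reshape into a tidy frame. A minimal sketch using a stand-in dictionary in place of an actual `forecast(H)` call:

``` python
import numpy as np
import pandas as pd

H = 3
# Stand-in for the documented return value of forecast(H):
# one key per target column, one array of H step-ahead values each.
preds = {"y1": np.array([19.5, 20.1, 21.0]),
         "y2": np.array([13.0, 16.2, 20.4])}

# One row per future step, one column per target.
fcst = pd.DataFrame(preds, index=pd.RangeIndex(1, H + 1, name="step"))
```

With a fitted instance `fc`, the same reshaping would apply to `fc.forecast(H, exog=future_exog)`.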

------------------------------------------------------------------------

<a
href="https://github.com/mustafaslanCoto/peshbeen/blob/main/peshbeen/models/ml_mv_forecaster.py#L625"
target="_blank" style="float:right; font-size:smaller">source</a>

### ml_mv_forecaster.cross_validate

``` python

def cross_validate(
    df:pd.DataFrame, # Input dataframe.
    target_col:str, # Target variable for evaluation.
    cv_split:int, # Number of cross-validation folds.
    test_size:int, # Test size per fold.
    metrics:List[Callable], # Metric functions (e.g. ``[MAE, RMSE]``) used to evaluate forecast accuracy across folds. Call ``.cv_summary()`` after cross-validation to retrieve the aggregated scores.
    step_size:int=1, # Step size for rolling window. Default is 1.
    h_split_point:Optional[int]=None, # Point to split the test set for separate evaluation. Default is None.
)->Union[pd.DataFrame, Tuple[pd.DataFrame, pd.DataFrame]]: # DataFrame with overall performance metrics averaged across folds. If h_split_point is provided, also includes separate performance before and after the split point.

```

*Perform cross-validation.*

<table>
<colgroup>
<col style="width: 6%" />
<col style="width: 25%" />
<col style="width: 34%" />
<col style="width: 34%" />
</colgroup>
<thead>
<tr>
<th></th>
<th><strong>Type</strong></th>
<th><strong>Default</strong></th>
<th><strong>Details</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>df</td>
<td>pd.DataFrame</td>
<td></td>
<td>Input dataframe.</td>
</tr>
<tr>
<td>target_col</td>
<td>str</td>
<td></td>
<td>Target variable for evaluation.</td>
</tr>
<tr>
<td>cv_split</td>
<td>int</td>
<td></td>
<td>Number of cross-validation folds.</td>
</tr>
<tr>
<td>test_size</td>
<td>int</td>
<td></td>
<td>Test size per fold.</td>
</tr>
<tr>
<td>metrics</td>
<td>List[Callable]</td>
<td></td>
<td>Metric functions (e.g. <code>[MAE, RMSE]</code>) used to evaluate
forecast accuracy across folds. Call <code>.cv_summary()</code> after
cross-validation to retrieve the aggregated scores.</td>
</tr>
<tr>
<td>step_size</td>
<td>int</td>
<td>1</td>
<td>Step size for rolling window. Default is 1.</td>
</tr>
<tr>
<td>h_split_point</td>
<td>Optional[int]</td>
<td>None</td>
<td>Point to split the test set for separate evaluation. Default is
None.</td>
</tr>
<tr>
<td><strong>Returns</strong></td>
<td><strong>Union[pd.DataFrame, Tuple[pd.DataFrame,
pd.DataFrame]]</strong></td>
<td></td>
<td><strong>DataFrame with overall performance metrics averaged across
folds. If h_split_point is provided, also includes separate performance
before and after the split point.</strong></td>
</tr>
</tbody>
</table>
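The `metrics` argument expects callables of the form `metric(y_true, y_pred) -> scalar`. The definitions below are hypothetical stand-ins with that shape (the library may ship its own `MAE`/`RMSE` implementations):

``` python
import numpy as np

# Example metric callables in the shape metrics=[MAE, RMSE] expects:
# each takes (y_true, y_pred) and returns a scalar score.
def MAE(y_true, y_pred):
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def RMSE(y_true, y_pred):
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

y_true = [3.0, 5.0, 7.0]
y_pred = [2.0, 5.0, 9.0]
scores = {m.__name__: m(y_true, y_pred) for m in [MAE, RMSE]}
```

These would be passed as, e.g., `fc.cross_validate(df, target_col="y1", cv_split=5, test_size=12, metrics=[MAE, RMSE])`, with the aggregated scores retrieved via `.cv_summary()` as described above (call shape inferred from the signature, not verified against the library).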
