

<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! -->

------------------------------------------------------------------------

<a
href="https://github.com/mustafaslanCoto/peshbeen/blob/main/peshbeen/models/var.py#L22"
target="_blank" style="float:right; font-size:smaller">source</a>

### var

``` python

def var(
    target_cols:List[str], # List of target column names to model.
    lags:Dict[str, Union[int, List[int]]], # Dictionary specifying lags for each target variable. Values can be an int (number of lags) or a list of specific lag indices.
    lag_transform:Optional[Dict[str, list]]=None, # Dictionary specifying lag-transform functions for each target variable. Each value is a list of transformation functions (e.g., rolling_mean, expanding_std) to apply to the lagged features of that target.
    difference:Optional[Dict[str, int]]=None, # Dictionary specifying the order of ordinary differencing to apply to each target variable. Values are integers indicating how many times to difference the series.
    seasonal_diff:Optional[Dict[str, int]]=None, # Dictionary specifying the seasonal period for seasonal differencing for each target variable. Values are integers indicating the seasonal lag (e.g., 12 for monthly data with yearly seasonality).
    trend:Optional[Dict[str, str]]=None, # Dictionary specifying the trend strategy for each target variable. Values can be 'linear' for linear trend removal or 'ets' for ETS-based trend removal.
    pol_degree:Optional[Union[int, Dict[str, int]]]=1, # Polynomial degree for linear trend removal. Can be a single integer applied to all targets or a dictionary specifying the degree for each target.
    ets_params:Optional[Dict[str, Any]]=None, # Dictionary specifying ETS model and fit parameters for each target variable when using 'ets' trend removal. Each value is a dictionary of parameters for the ExponentialSmoothing model and fitting process.
    change_points:Optional[Dict[str, List[int]]]=None, # Dictionary specifying change points for piecewise linear trend removal for each target variable. Values are lists of integer indices indicating where the trend should change. Only applicable when trend strategy is 'linear'.
    box_cox:Optional[Dict[str, Union[bool, float, int]]]=None, # Dictionary specifying whether to apply Box-Cox transformation to each target variable. Values can be a boolean (True to apply, False to skip) or a float (lambda parameter for Box-Cox transformation). If True, lambda will be estimated from the data.
    box_cox_biasadj:Union[bool, Dict[str, bool]]=False, # Whether to apply bias adjustment when inverting the Box-Cox transformation on forecasts. Can be a single boolean applied to all targets or a dictionary specifying the bias adjustment for each target.
    add_constant:bool=True, # If True, a constant column will be added to the regressor matrix for the VAR model. This is typically used to allow for an intercept in the model.
    cat_variables:Optional[List[str]]=None, # List of categorical feature column names to encode. These will be shared across all target variables.
    categorical_encoder:Optional[Union[Dict[str, Any], Any]]=None, # A categorical encoder instance, or a single-entry dictionary mapping the target column to the encoder when the encoder requires access to the target variable during fitting (e.g. {target_col: MeanEncoder()}). If an encoder requiring target access is provided directly without the dict format, the first target column in target_cols will be used to fit the encoder. For encoders that do not require target access, pass the encoder instance directly (e.g. OneHotEncoder()).
    verbose:bool=False, # If True, the model will print verbose messages.
)->None:

```

*Initialize the VAR model with specified preprocessing and modeling
parameters.*

<table>
<colgroup>
<col style="width: 6%" />
<col style="width: 25%" />
<col style="width: 34%" />
<col style="width: 34%" />
</colgroup>
<thead>
<tr>
<th></th>
<th><strong>Type</strong></th>
<th><strong>Default</strong></th>
<th><strong>Details</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>target_cols</td>
<td>List[str]</td>
<td></td>
<td>List of target column names to model.</td>
</tr>
<tr>
<td>lags</td>
<td>Dict[str, Union[int, List[int]]]</td>
<td></td>
<td>Dictionary specifying lags for each target variable. Values can be
an int (number of lags) or a list of specific lag indices.</td>
</tr>
<tr>
<td>lag_transform</td>
<td>Optional[Dict[str, list]]</td>
<td>None</td>
<td>Dictionary specifying lag-transform functions for each target
variable. Each value is a list of transformation functions (e.g.,
rolling_mean, expanding_std) to apply to the lagged features of that
target.</td>
</tr>
<tr>
<td>difference</td>
<td>Optional[Dict[str, int]]</td>
<td>None</td>
<td>Dictionary specifying the order of ordinary differencing to apply to
each target variable. Values are integers indicating how many times to
difference the series.</td>
</tr>
<tr>
<td>seasonal_diff</td>
<td>Optional[Dict[str, int]]</td>
<td>None</td>
<td>Dictionary specifying the seasonal period for seasonal differencing
for each target variable. Values are integers indicating the seasonal
lag (e.g., 12 for monthly data with yearly seasonality).</td>
</tr>
<tr>
<td>trend</td>
<td>Optional[Dict[str, str]]</td>
<td>None</td>
<td>Dictionary specifying the trend strategy for each target variable.
Values can be ‘linear’ for linear trend removal or ‘ets’ for ETS-based
trend removal.</td>
</tr>
<tr>
<td>pol_degree</td>
<td>Optional[Union[int, Dict[str, int]]]</td>
<td>1</td>
<td>Polynomial degree for linear trend removal. Can be a single integer
applied to all targets or a dictionary specifying the degree for each
target.</td>
</tr>
<tr>
<td>ets_params</td>
<td>Optional[Dict[str, Any]]</td>
<td>None</td>
<td>Dictionary specifying ETS model and fit parameters for each target
variable when using ‘ets’ trend removal. Each value is a dictionary of
parameters for the ExponentialSmoothing model and fitting process.</td>
</tr>
<tr>
<td>change_points</td>
<td>Optional[Dict[str, List[int]]]</td>
<td>None</td>
<td>Dictionary specifying change points for piecewise linear trend
removal for each target variable. Values are lists of integer indices
indicating where the trend should change. Only applicable when trend
strategy is ‘linear’.</td>
</tr>
<tr>
<td>box_cox</td>
<td>Optional[Dict[str, Union[bool, float, int]]]</td>
<td>None</td>
<td>Dictionary specifying whether to apply Box-Cox transformation to
each target variable. Values can be a boolean (True to apply, False to
skip) or a float (lambda parameter for Box-Cox transformation). If True,
lambda will be estimated from the data.</td>
</tr>
<tr>
<td>box_cox_biasadj</td>
<td>Union[bool, Dict[str, bool]]</td>
<td>False</td>
<td>Whether to apply bias adjustment when inverting the Box-Cox
transformation on forecasts. Can be a single boolean applied to all
targets or a dictionary specifying the bias adjustment for each
target.</td>
</tr>
<tr>
<td>add_constant</td>
<td>bool</td>
<td>True</td>
<td>If True, a constant column will be added to the regressor matrix for
the VAR model. This is typically used to allow for an intercept in the
model.</td>
</tr>
<tr>
<td>cat_variables</td>
<td>Optional[List[str]]</td>
<td>None</td>
<td>List of categorical feature column names to encode. These will be
shared across all target variables.</td>
</tr>
<tr>
<td>categorical_encoder</td>
<td>Optional[Union[Dict[str, Any], Any]]</td>
<td>None</td>
<td>A categorical encoder instance, or a single-entry dictionary mapping
the target column to the encoder when the encoder requires access to the
target variable during fitting (e.g. {target_col: MeanEncoder()}). If an
encoder requiring target access is provided directly without the dict
format, the first target column in target_cols will be used to fit the
encoder. For encoders that do not require target access, pass the
encoder instance directly (e.g. OneHotEncoder()).</td>
</tr>
<tr>
<td>verbose</td>
<td>bool</td>
<td>False</td>
<td>If True, the model will print verbose messages.</td>
</tr>
<tr>
<td><strong>Returns</strong></td>
<td><strong>None</strong></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
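
To make the `lags` semantics concrete: an integer value expands to lags
1 through *n*, while a list keeps only the listed lag indices. The
feature construction below is an illustrative sketch, not peshbeen's
actual implementation (the helper name is hypothetical):

``` python
import numpy as np
import pandas as pd

def build_lag_features(df, lags):
    # Hypothetical helper mirroring the `lags` argument's semantics:
    # an int n expands to lags 1..n; a list keeps only the listed lags.
    out = pd.DataFrame(index=df.index)
    for col, spec in lags.items():
        lag_idx = range(1, spec + 1) if isinstance(spec, int) else spec
        for k in lag_idx:
            out[f"{col}_lag{k}"] = df[col].shift(k)
    # Rows without a full set of lags are dropped before fitting.
    return out.dropna()

df = pd.DataFrame({"y1": np.arange(10.0), "y2": np.arange(10.0) ** 2})
X = build_lag_features(df, {"y1": 2, "y2": [1, 3]})
print(list(X.columns))
```

With `{"y1": 2, "y2": [1, 3]}`, `y1` contributes lags 1 and 2 while
`y2` contributes only lags 1 and 3, and the first three rows are lost
to the deepest lag.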

------------------------------------------------------------------------

<a
href="https://github.com/mustafaslanCoto/peshbeen/blob/main/peshbeen/models/var.py#L351"
target="_blank" style="float:right; font-size:smaller">source</a>

### var.fit

``` python

def fit(
    df:pd.DataFrame, # Training DataFrame containing the target and any feature columns.
)->None:

```

*Fit the VAR model to the provided DataFrame.*

<table>
<colgroup>
<col style="width: 9%" />
<col style="width: 38%" />
<col style="width: 52%" />
</colgroup>
<thead>
<tr>
<th></th>
<th><strong>Type</strong></th>
<th><strong>Details</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>df</td>
<td>pd.DataFrame</td>
<td>Training DataFrame containing the target and any feature
columns.</td>
</tr>
<tr>
<td><strong>Returns</strong></td>
<td><strong>None</strong></td>
<td></td>
</tr>
</tbody>
</table>
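
The fitting itself happens inside the library, but the core idea can be
sketched independently: with the lagged values stacked as regressors
(plus a constant column, cf. `add_constant=True`), a VAR reduces to an
ordinary least-squares problem. This is a minimal illustration on
simulated data, not peshbeen's code:

``` python
import numpy as np

rng = np.random.default_rng(0)
# Simulate a bivariate VAR(1): y_t = A @ y_{t-1} + noise
A = np.array([[0.5, 0.1], [0.2, 0.4]])
y = np.zeros((200, 2))
for t in range(1, 200):
    y[t] = A @ y[t - 1] + rng.normal(scale=0.1, size=2)

# Regress y[1:] on y[:-1] plus an intercept column
X = np.hstack([y[:-1], np.ones((199, 1))])
B, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
A_hat = B[:2].T  # estimated coefficient matrix
print(np.round(A_hat, 2))
```

The recovered `A_hat` should be close to the true `A`, illustrating why
the design matrix of lags (and any differencing or trend removal done
beforehand) is all `fit` needs from the training DataFrame.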

------------------------------------------------------------------------

<a
href="https://github.com/mustafaslanCoto/peshbeen/blob/main/peshbeen/models/var.py#L467"
target="_blank" style="float:right; font-size:smaller">source</a>

### var.forecast

``` python

def forecast(
    H:int, # Forecast horizon (number of steps ahead to predict).
    exog:Optional[pd.DataFrame]=None, # Future exogenous regressors (must contain at least H rows).
)->Dict[str, np.ndarray]: # Forecasted values for each target, keyed by column name.

```

*Generate forecasts for H future time steps.*

<table>
<colgroup>
<col style="width: 6%" />
<col style="width: 25%" />
<col style="width: 34%" />
<col style="width: 34%" />
</colgroup>
<thead>
<tr>
<th></th>
<th><strong>Type</strong></th>
<th><strong>Default</strong></th>
<th><strong>Details</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>H</td>
<td>int</td>
<td></td>
<td>Forecast horizon (number of steps ahead to predict).</td>
</tr>
<tr>
<td>exog</td>
<td>Optional[pd.DataFrame]</td>
<td>None</td>
<td>Future exogenous regressors (must contain at least H rows).</td>
</tr>
<tr>
<td><strong>Returns</strong></td>
<td><strong>Dict[str, np.ndarray]</strong></td>
<td></td>
<td><strong>Forecasted values for each target, keyed by column
name.</strong></td>
</tr>
</tbody>
</table>
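
Multi-step forecasts from a VAR are typically produced recursively:
each step's prediction is fed back in as the next step's lag. The
sketch below shows that mechanism for a fitted VAR(1) and mirrors the
return type documented above (a dict keyed by target name); the
function and coefficient names are hypothetical, and exogenous
regressors are omitted:

``` python
import numpy as np

def recursive_forecast(A, const, last, H, names=("y1", "y2")):
    # Roll the fitted system forward H steps; each prediction becomes
    # the lag input for the following step.
    preds, y = [], np.asarray(last, dtype=float)
    for _ in range(H):
        y = A @ y + const
        preds.append(y)
    preds = np.array(preds)
    return {name: preds[:, i] for i, name in enumerate(names)}

A = np.array([[0.5, 0.1], [0.2, 0.4]])
fc = recursive_forecast(A, np.zeros(2), last=[1.0, 2.0], H=3)
print(fc["y1"])
```

Any preprocessing applied during fitting (differencing, trend removal,
Box-Cox) must be inverted on these raw predictions before they are
returned on the original scale.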

------------------------------------------------------------------------

<a
href="https://github.com/mustafaslanCoto/peshbeen/blob/main/peshbeen/models/var.py#L580"
target="_blank" style="float:right; font-size:smaller">source</a>

### var.cross_validate

``` python

def cross_validate(
    df:pd.DataFrame, # Input dataframe.
    target_col:str, # Target variable for evaluation.
    cv_split:int, # Number of cross-validation folds.
    test_size:int, # Test size per fold.
    metrics:List[Callable], # Metric functions (e.g. ``[MAE, RMSE]``) used to evaluate forecast accuracy across folds. Call ``.cv_summary()`` after cross-validation to retrieve the aggregated scores.
    step_size:int=1, # Step size for rolling window. Default is 1.
    h_split_point:Optional[int]=None, # Point to split the test set for separate evaluation. Default is None.
)->Union[pd.DataFrame, Tuple[pd.DataFrame, pd.DataFrame]]: # DataFrame with averaged cross-validation metric scores.

```

*Perform cross-validation.*

<table>
<colgroup>
<col style="width: 6%" />
<col style="width: 25%" />
<col style="width: 34%" />
<col style="width: 34%" />
</colgroup>
<thead>
<tr>
<th></th>
<th><strong>Type</strong></th>
<th><strong>Default</strong></th>
<th><strong>Details</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>df</td>
<td>pd.DataFrame</td>
<td></td>
<td>Input dataframe.</td>
</tr>
<tr>
<td>target_col</td>
<td>str</td>
<td></td>
<td>Target variable for evaluation.</td>
</tr>
<tr>
<td>cv_split</td>
<td>int</td>
<td></td>
<td>Number of cross-validation folds.</td>
</tr>
<tr>
<td>test_size</td>
<td>int</td>
<td></td>
<td>Test size per fold.</td>
</tr>
<tr>
<td>metrics</td>
<td>List[Callable]</td>
<td></td>
<td>Metric functions (e.g. <code>[MAE, RMSE]</code>) used to evaluate
forecast accuracy across folds. Call <code>.cv_summary()</code> after
cross-validation to retrieve the aggregated scores.</td>
</tr>
<tr>
<td>step_size</td>
<td>int</td>
<td>1</td>
<td>Step size for rolling window. Default is 1.</td>
</tr>
<tr>
<td>h_split_point</td>
<td>Optional[int]</td>
<td>None</td>
<td>Point to split the test set for separate evaluation. Default is
None.</td>
</tr>
<tr>
<td><strong>Returns</strong></td>
<td><strong>Union[pd.DataFrame, Tuple[pd.DataFrame,
pd.DataFrame]]</strong></td>
<td></td>
<td><strong>DataFrame with averaged cross-validation metric
scores.</strong></td>
</tr>
</tbody>
</table>
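
To see how `cv_split`, `test_size`, and `step_size` interact, here is
one common rolling-origin scheme: the last fold's test window ends at
the final observation, and each earlier fold shifts the origin back by
`step_size`. This is an assumption about the splitting logic for
illustration only; peshbeen's exact scheme may differ:

``` python
import numpy as np

def rolling_origin_splits(n, cv_split, test_size, step_size=1):
    # Hypothetical split generator: fold i's test window ends at
    # n - i * step_size; everything before it is training data.
    splits = []
    for i in range(cv_split):
        test_end = n - i * step_size
        test_start = test_end - test_size
        splits.append((np.arange(0, test_start),
                       np.arange(test_start, test_end)))
    return splits[::-1]  # chronological order

splits = rolling_origin_splits(n=12, cv_split=3, test_size=3, step_size=2)
for train_idx, test_idx in splits:
    print(len(train_idx), test_idx.tolist())
```

Each fold refits the model on its training slice and scores the
`test_size` held-out steps with every function in `metrics`; the
per-fold scores are then averaged into the returned summary.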
