source

pesh


def pesh(
    models:dict, # A dictionary of model instances to be used for forecasting. The keys should be string names for each model.
    weighting_scheme:Optional[Dict[str, float]]=None, # Optional dictionary specifying weights for each model's forecast. Default is None, which means equal weighting.
)->None:

Initialize the pesh model with the specified parameters. pesh produces hybrid forecasts by combining the forecasts of multiple models.

| | Type | Default | Details |
|---|---|---|---|
| models | dict | | A dictionary of model instances to be used for forecasting. The keys should be string names for each model. |
| weighting_scheme | Optional[Dict[str, float]] | None | Optional dictionary specifying weights for each model's forecast. Default is None, which means equal weighting. |
| **Returns** | **None** | | |
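The weighted combination itself can be illustrated with plain NumPy. The model names and forecast values below are hypothetical, not taken from pesh internals; the sketch only shows how equal weighting (`weighting_scheme=None`) differs from an explicit weight dictionary:

```python
import numpy as np

# Hypothetical per-model forecasts, keyed by model name as in `models`.
forecasts = {
    "naive": np.array([10.0, 10.0, 10.0]),
    "trend": np.array([11.0, 12.0, 13.0]),
}

# weighting_scheme=None -> equal weights across models
equal = np.mean(list(forecasts.values()), axis=0)

# Explicit weighting_scheme: keys match model names, values sum to 1
weights = {"naive": 0.25, "trend": 0.75}
weighted = sum(w * forecasts[name] for name, w in weights.items())

print(equal)     # [10.5 11.  11.5]
print(weighted)  # [10.75 11.5  12.25]
```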

source

pesh.fit


def fit(
    df:pd.DataFrame, # Training DataFrame containing the target and any feature columns.
)->None:

Fit the specified models to the training data.

| | Type | Details |
|---|---|---|
| df | pd.DataFrame | Training DataFrame containing the target and any feature columns. |
| **Returns** | **None** | |

source

pesh.forecast


def forecast(
    H:int, # Forecast horizon.
    exog:Optional[pd.DataFrame]=None, # Optional dataframe of future regressors. Must have the same columns as the exogenous variables used during training and at least `H` rows.
)->np.ndarray: # Forecast values of length `H`.

Produce a recursive multi-step forecast: each one-step-ahead prediction is fed back in as an input when predicting the next step.

| | Type | Default | Details |
|---|---|---|---|
| H | int | | Forecast horizon. |
| exog | Optional[pd.DataFrame] | None | Optional dataframe of future regressors. Must have the same columns as the exogenous variables used during training and at least `H` rows. |
| **Returns** | **np.ndarray** | | Forecast values of length `H`. |
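The recursive scheme is generic and can be sketched independently of pesh. The `recursive_forecast` helper and the toy AR(1) model below are illustrative assumptions, not pesh's internal code; they only demonstrate how a one-step model is rolled forward `H` steps:

```python
import numpy as np

def recursive_forecast(one_step, history, H):
    """Apply a one-step-ahead model H times, feeding predictions back in."""
    history = list(history)
    out = []
    for _ in range(H):
        yhat = one_step(history)   # predict one step ahead
        out.append(yhat)
        history.append(yhat)       # treat the prediction as the newest observation
    return np.array(out)

# Toy one-step model: AR(1) with coefficient 0.5
ar1 = lambda h: 0.5 * h[-1]
print(recursive_forecast(ar1, [8.0], H=3))  # [4. 2. 1.]
```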

source

pesh.cross_validate


def cross_validate(
    df:pd.DataFrame, # The input DataFrame containing the target and any feature columns.
    cv_split:int, # The number of cross-validation splits.
    test_size:int, # The size of the test set for each split.
    metrics:List[Callable], # Metric functions (e.g. ``[MAE, RMSE]``) used to evaluate forecast accuracy across folds. Call ``.cv_summary()`` after cross-validation to retrieve the aggregated scores.
    step_size:int=1, # The step size for rolling the forecasting origin.
    metric_to_opt:Optional[Callable]=None, # An optional metric function to optimize when weighting_scheme is set to "optimize". If None, it defaults to the first metric in the metrics list.
    weighting_scheme:Optional[Union[Dict[str, float], str]]=None, # None: equal weights across models. dict: user-provided weights (must sum to 1). "optimize": optimize weights to minimize MSE via `scipy.optimize.minimize`.
    optimizer:str='SLSQP', # Optimization method to use when weighting_scheme is set to "optimize". Passed to `scipy.optimize.minimize`. Refer to SciPy documentation for available methods.
)->pd.DataFrame: # A DataFrame containing the performance metrics for each model and the combined forecast across all cross-validation splits. Also, optimized weights are stored in `self.optimal_weights_` if `weighting_scheme` is "optimize".

Perform cross-validation for the pesh model using a rolling forecasting origin approach.

| | Type | Default | Details |
|---|---|---|---|
| df | pd.DataFrame | | The input DataFrame containing the target and any feature columns. |
| cv_split | int | | The number of cross-validation splits. |
| test_size | int | | The size of the test set for each split. |
| metrics | List[Callable] | | Metric functions (e.g. `[MAE, RMSE]`) used to evaluate forecast accuracy across folds. Call `.cv_summary()` after cross-validation to retrieve the aggregated scores. |
| step_size | int | 1 | The step size for rolling the forecasting origin. |
| metric_to_opt | Optional[Callable] | None | An optional metric function to optimize when `weighting_scheme` is set to "optimize". If None, it defaults to the first metric in the `metrics` list. |
| weighting_scheme | Optional[Union[Dict[str, float], str]] | None | None: equal weights across models. dict: user-provided weights (must sum to 1). "optimize": optimize weights to minimize MSE via `scipy.optimize.minimize`. |
| optimizer | str | SLSQP | Optimization method used when `weighting_scheme` is "optimize". Passed to `scipy.optimize.minimize`; see the SciPy documentation for available methods. |
| **Returns** | **pd.DataFrame** | | A DataFrame containing the performance metrics for each model and the combined forecast across all cross-validation splits. When `weighting_scheme` is "optimize", the optimized weights are also stored in `self.optimal_weights_`. |
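The "optimize" weighting scheme can be sketched as a small constrained minimization: find non-negative weights that sum to 1 and minimize the MSE of the combined forecast, using `scipy.optimize.minimize` with the default SLSQP method. The forecasts and targets below are made-up illustration data, and this is an assumed outline of the idea, not pesh's actual implementation:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical cross-validated forecasts: one row per model.
cv_preds = np.array([[10.0, 10.0, 10.0],   # model A
                     [12.0, 13.0, 14.0]])  # model B
y_true = np.array([11.0, 12.5, 14.0])

def mse(w):
    # MSE of the weight-combined forecast against the held-out targets.
    return np.mean((y_true - w @ cv_preds) ** 2)

n_models = cv_preds.shape[0]
res = minimize(
    mse,
    x0=np.full(n_models, 1.0 / n_models),          # start from equal weights
    method="SLSQP",
    bounds=[(0.0, 1.0)] * n_models,                # each weight in [0, 1]
    constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0},  # weights sum to 1
)
print(res.x)  # optimized weights, favoring the more accurate model B
```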