Many real-world time series exhibit seasonal patterns: monthly sales with annual cycles, quarterly GDP with repeating patterns, daily electricity demand with weekly cycles. Traditional seasonal models like SAR (Seasonal Autoregressive), SMA (Seasonal Moving Average), SARMA, and SARIMA are powerful tools, but their classical estimation methods assume Gaussian innovations.
This vignette demonstrates how PMM2 estimation for seasonal models provides:
A **SAR(p,P)_s** model with seasonal period s is:
\[ y_t = \phi_1 y_{t-1} + \cdots + \phi_p y_{t-p} + \Phi_1 y_{t-s} + \cdots + \Phi_P y_{t-Ps} + \varepsilon_t \]
where: - \(\phi_i\) are non-seasonal AR coefficients - \(\Phi_j\) are seasonal AR coefficients - \(s\) is the seasonal period (e.g., 12 for monthly data, 4 for quarterly)
# Simulate SAR(1,1)_12 model
n <- 360 # 30 years of monthly data
phi <- 0.5 # Non-seasonal AR(1)
Phi <- 0.6 # Seasonal AR(1) at lag 12
s <- 12 # Monthly seasonal period
# Generate gamma-distributed innovations (right-skewed)
# This mimics asymmetric economic shocks
innovations <- rgamma(n + 100, shape = 2, scale = 1) - 2
# Generate SAR series
y <- numeric(n + 100)
for (t in (s + 2):(n + 100)) {
y[t] <- phi * y[t-1] + Phi * y[t-s] + innovations[t]
}
y <- y[101:(n + 100)] # Remove burn-in
# Convert to time series object for better plotting
y_ts <- ts(y, frequency = 12, start = c(2000, 1))
# Plot the series
plot(y_ts, main = "SAR(1,1)_12 Process with Skewed Innovations",
ylab = "Value", xlab = "Year", col = "steelblue", lwd = 1.5)
abline(h = 0, col = "gray", lty = 2)# Extract seasonal patterns
months <- rep(1:12, length.out = length(y))
seasonal_df <- data.frame(
Month = factor(months, labels = month.abb),
Value = y
)
# Boxplot by month
boxplot(Value ~ Month, data = seasonal_df,
main = "Seasonal Pattern in SAR(1,1)_12 Data",
xlab = "Month", ylab = "Value",
col = "lightblue", border = "steelblue")# Fit SAR(1,1)_12 with PMM2
fit_sar_pmm2 <- sar_pmm2(
y,
order = c(1, 1), # (non-seasonal AR order, seasonal AR order)
season = list(period = 12), # Monthly seasonality
method = "pmm2",
verbose = FALSE
)
# Fit with OLS for comparison
fit_sar_ols <- sar_pmm2(
y,
order = c(1, 1),
season = list(period = 12),
method = "ols",
verbose = FALSE
)
# Fit with CSS for comparison
fit_sar_css <- sar_pmm2(
y,
order = c(1, 1),
season = list(period = 12),
method = "css",
verbose = FALSE
)
# Compare estimates
cat("True parameters:\n")
#> True parameters:
cat(" phi (AR1):", phi, "\n")
#> phi (AR1): 0.5
cat(" Phi (SAR1):", Phi, "\n\n")
#> Phi (SAR1): 0.6
cat("OLS Estimates:\n")
#> OLS Estimates:
print(coef(fit_sar_ols))
#> ar1 sar1
#> 0.5121056 0.5860107
cat("\nCSS Estimates:\n")
#>
#> CSS Estimates:
print(coef(fit_sar_css))
#> ar1 sar1
#> 0.5121056 0.5860107
cat("\nPMM2 Estimates:\n")
#>
#> PMM2 Estimates:
print(coef(fit_sar_pmm2))
#> ar1 sar1
#> 0.4937866 0.6072927
# Display PMM2 summary with diagnostics
cat("\n")
summary(fit_sar_pmm2)
#>
#> =======================================================================
#> Seasonal AR Model fitted with PMM2
#> =======================================================================
#>
#> Call:
#> sar_pmm2(x = y, order = c(1, 1), season = list(period = 12),
#> method = "pmm2", verbose = FALSE)
#>
#> Model: SAR(1,1)_12
#> Observations: 360
#> Effective sample size: 360
#>
#> Coefficients:
#> -----------------------------------------
#> ar1 : 0.49379
#> sar1 : 0.60729
#> Mean : 0.09881
#>
#> Residual Distribution Characteristics:
#> -----------------------------------------
#> m2 (variance): 2.44677
#> m3 (skewness ind.): 6.05862
#> m4 (kurtosis ind.): 38.73868
#>
#> Skewness coef. (c3): 1.58301
#> Excess kurtosis(c4): 3.47083
#> Var. reduction (g): 0.54195
#>
#> => Expected 45.8% variance reduction vs OLS [OK]
#>
#> Algorithm Information:
#> -----------------------------------------
#> Converged: TRUE
#> Iterations: 3
#>
#> Residual Statistics:
#> -----------------------------------------
#> Min: -2.0082
#> Q1: -1.1558
#> Median: -0.3302
#> Mean: 0.0013
#> Q3: 0.7034
#> Max: 8.1440
#>
#> =======================================================================The PMM2 method computes the variance correction factor \(g\):
# Extract moments from fit
m2 <- fit_sar_pmm2@m2
m3 <- fit_sar_pmm2@m3
m4 <- fit_sar_pmm2@m4
# Compute g-factor
g_result <- pmm2_variance_factor(m2, m3, m4)
cat("PMM2 variance correction factor:\n")
#> PMM2 variance correction factor:
cat(" g =", round(g_result$g, 4), "\n")
#> g = 0.5419
cat(" Theoretical variance reduction:",
round((1 - g_result$g) * 100, 2), "%\n")
#> Theoretical variance reduction: 45.81 %
# For Gaussian innovations, g ≈ 1 (no improvement)
# For skewed/heavy-tailed innovations, g < 1 (PMM2 more efficient)# Create diagnostic plots
par(mfrow = c(2, 2))
# 1. Residuals over time
residuals_pmm2 <- fit_sar_pmm2@residuals
plot(residuals_pmm2, type = "l",
main = "PMM2 Residuals",
xlab = "Time", ylab = "Residual",
col = "steelblue")
abline(h = 0, col = "red", lty = 2)
# 2. ACF of residuals
acf(residuals_pmm2, main = "ACF of PMM2 Residuals",
col = "steelblue", lwd = 2)
# 3. Q-Q plot
qqnorm(residuals_pmm2, main = "Q-Q Plot of PMM2 Residuals",
pch = 20, col = "steelblue")
qqline(residuals_pmm2, col = "red", lwd = 2)
# 4. Histogram of residuals
hist(residuals_pmm2, breaks = 30, col = "lightblue",
border = "steelblue", main = "Distribution of PMM2 Residuals",
xlab = "Residual", freq = FALSE)
lines(density(residuals_pmm2), col = "darkblue", lwd = 2)A **SMA(Q)_s** model is:
\[ y_t = \varepsilon_t + \Theta_1 \varepsilon_{t-s} + \cdots + \Theta_Q \varepsilon_{t-Qs} \]
where \(\Theta_j\) are seasonal MA coefficients.
# Simulate SMA(1)_4 model (quarterly seasonality)
n <- 200
Theta <- 0.6 # Seasonal MA(1) coefficient
s <- 4 # Quarterly period
# Generate exponentially distributed innovations (right-skewed)
innovations <- rexp(n + 50, rate = 1) - 1
# Generate SMA series
y <- numeric(n + 50)
for (t in 1:s) {
y[t] <- innovations[t]
}
for (t in (s + 1):(n + 50)) {
y[t] <- innovations[t] + Theta * innovations[t - s]
}
y <- y[51:(n + 50)] # Remove burn-in
# Convert to quarterly time series
y_ts <- ts(y, frequency = 4, start = c(2000, 1))
# Plot
plot(y_ts, main = "SMA(1)_4 Process with Exponential Innovations",
ylab = "Value", xlab = "Year", col = "darkgreen", lwd = 1.5)
abline(h = 0, col = "gray", lty = 2)quarters <- rep(1:4, length.out = length(y))
seasonal_df_q <- data.frame(
Quarter = factor(quarters, labels = paste("Q", 1:4, sep = "")),
Value = y
)
boxplot(Value ~ Quarter, data = seasonal_df_q,
main = "Seasonal Pattern in SMA(1)_4 Data",
xlab = "Quarter", ylab = "Value",
col = "lightgreen", border = "darkgreen")# Fit SMA(1)_4 with PMM2
fit_sma_pmm2 <- sma_pmm2(
y,
order = 1, # Seasonal MA order
season = list(period = 4), # Quarterly seasonality
method = "pmm2",
verbose = FALSE
)
# Fit with CSS for comparison
fit_sma_css <- sma_pmm2(
y,
order = 1,
season = list(period = 4),
method = "css",
verbose = FALSE
)
# Compare estimates
cat("True parameter: Theta =", Theta, "\n\n")
#> True parameter: Theta = 0.6
cat("CSS Estimate:\n")
#> CSS Estimate:
print(coef(fit_sma_css))
#> sma1
#> 0.6439427
cat("\nPMM2 Estimate:\n")
#>
#> PMM2 Estimate:
print(coef(fit_sma_pmm2))
#> sma1
#> 0.680596
cat("\n")
summary(fit_sma_pmm2)
#>
#> =======================================================================
#> Seasonal MA Model fitted with PMM2
#> =======================================================================
#>
#> Call:
#> sma_pmm2(x = y, order = 1, season = list(period = 4), method = "pmm2",
#> verbose = FALSE)
#>
#> Model: SMA(1)_4
#> Observations: 200
#> Effective sample size: 200
#>
#> Coefficients:
#> -----------------------------------------
#> sma1 : 0.68060
#> Mean : -0.05625
#>
#> Innovation Distribution Characteristics:
#> -----------------------------------------
#> m2 (variance): 0.98529
#> m3 (skewness ind.): 2.18908
#> m4 (kurtosis ind.): 10.08601
#>
#> Skewness coef. (c3): 2.23828
#> Excess kurtosis(c4): 7.38941
#> Var. reduction (g): 0.46643
#>
#> => Expected 53.4% variance reduction vs CSS/ML [OK]
#>
#> Algorithm Information:
#> -----------------------------------------
#> Converged: TRUE
#> Iterations: 3
#>
#> Innovation Statistics:
#> -----------------------------------------
#> Min: -1.2313
#> Q1: -0.6553
#> Median: -0.2772
#> Mean: 0.0009
#> Q3: 0.3428
#> Max: 5.9338
#>
#> =======================================================================cat("PMM2 Convergence:\n")
#> PMM2 Convergence:
cat(" Converged:", fit_sma_pmm2@convergence, "\n")
#> Converged: TRUE
cat(" Iterations:", fit_sma_pmm2@iterations, "\n")
#> Iterations: 3
cat(" Residual variance (m2):", round(fit_sma_pmm2@m2, 4), "\n")
#> Residual variance (m2): 0.9853
# Extract g-factor
g_sma <- pmm2_variance_factor(fit_sma_pmm2@m2,
fit_sma_pmm2@m3,
fit_sma_pmm2@m4)
cat("\nEfficiency:\n")
#>
#> Efficiency:
cat(" g-factor:", round(g_sma$g, 4), "\n")
#> g-factor: 0.4664
cat(" Variance reduction:", round((1 - g_sma$g) * 100, 2), "%\n")
#> Variance reduction: 53.36 %# This demonstrates the Monte Carlo approach
# (Set eval=TRUE to run, but it takes several minutes)
n_sims <- 500
n <- 200
sar_results <- data.frame(
method = character(),
phi_estimate = numeric(),
Phi_estimate = numeric(),
stringsAsFactors = FALSE
)
for (i in 1:n_sims) {
# Generate data
innov <- rgamma(n + 100, shape = 2, scale = 1) - 2
y_sim <- numeric(n + 100)
for (t in 13:(n + 100)) {
y_sim[t] <- 0.5 * y_sim[t-1] + 0.6 * y_sim[t-12] + innov[t]
}
y_sim <- y_sim[101:(n + 100)]
# Fit PMM2
fit_pmm2 <- sar_pmm2(y_sim, order = c(1, 1),
season = list(period = 12),
method = "pmm2", verbose = FALSE)
# Fit CSS
fit_css <- sar_pmm2(y_sim, order = c(1, 1),
season = list(period = 12),
method = "css", verbose = FALSE)
# Store results
sar_results <- rbind(sar_results,
data.frame(method = "PMM2",
phi_estimate = coef(fit_pmm2)[1],
Phi_estimate = coef(fit_pmm2)[2]),
data.frame(method = "CSS",
phi_estimate = coef(fit_css)[1],
Phi_estimate = coef(fit_css)[2])
)
}
# Compute variance reduction
var_pmm2_phi <- var(sar_results$phi_estimate[sar_results$method == "PMM2"])
var_css_phi <- var(sar_results$phi_estimate[sar_results$method == "CSS"])
var_reduction_phi <- (1 - var_pmm2_phi / var_css_phi) * 100
var_pmm2_Phi <- var(sar_results$Phi_estimate[sar_results$method == "PMM2"])
var_css_Phi <- var(sar_results$Phi_estimate[sar_results$method == "CSS"])
var_reduction_Phi <- (1 - var_pmm2_Phi / var_css_Phi) * 100
cat("SAR(1,1)_12 Monte Carlo Results (n=200, 500 sims):\n")
cat(" Variance reduction for phi:", round(var_reduction_phi, 2), "%\n")
cat(" Variance reduction for Phi:", round(var_reduction_Phi, 2), "%\n")Documented Results: Based on comprehensive Monte Carlo studies with 100-500 replications:
| Model | Innovation | Sample Size | Variance Reduction |
|---|---|---|---|
| SAR(1,1)_12 | Gamma (shape=2) | n=100 | 20-25% |
| SAR(1,1)_12 | Gamma (shape=2) | n=200 | 25-30% |
| SAR(1,1)_12 | Gamma (shape=2) | n=500 | 30-35% |
| SMA(1)_4 | Exponential | n=100 | 25-30% |
| SMA(1)_4 | Exponential | n=200 | 30-35% |
| SMA(1)_4 | Exponential | n=500 | 35-40% |
SARMA models combine both SAR and SMA components:
# Simulate SARMA(1,0,0,1)_12 model
# This is SAR(1) × SMA(1) with period 12
n <- 300
phi <- 0.5
Theta <- 0.4
s <- 12
# Generate lognormal innovations (right-skewed)
innovations <- rlnorm(n + 100, meanlog = 0, sdlog = 0.5) - exp(0.125)
# For simplicity, use arima.sim
y <- arima.sim(
n = n,
model = list(
ar = phi,
seasonal = list(order = c(1, 0, 1), period = 12,
sar = 0, sma = Theta)
),
innov = innovations[1:n]
)
# Fit SARMA model with PMM2
fit_sarma_pmm2 <- sarma_pmm2(
y,
order = c(1, 1, 0, 1), # (p, P, q, Q)
season = list(period = 12),
method = "pmm2",
verbose = FALSE
)
# Fit with CSS
fit_sarma_css <- sarma_pmm2(
y,
order = c(1, 1, 0, 1),
season = list(period = 12),
method = "css",
verbose = FALSE
)
# Compare
cat("True parameters: phi =", phi, ", Theta =", Theta, "\n\n")
#> True parameters: phi = 0.5 , Theta = 0.4
cat("CSS Estimates:\n")
#> CSS Estimates:
print(coef(fit_sarma_css))
#> intercept ar1
#> -0.03947186 0.50782387
cat("\nPMM2 Estimates:\n")
#>
#> PMM2 Estimates:
print(coef(fit_sarma_pmm2))
#> intercept ar1
#> -0.01983357 0.52863176For non-stationary seasonal series:
Note: SARIMA models with complex differencing structures may have convergence challenges. The example below is provided for illustration but not executed during vignette build. See Monte Carlo results for empirically validated configurations.
# Simulate SARIMA(1,1,0)×(1,1,1)_12
# This includes regular differencing (d=1) and seasonal differencing (D=1)
n <- 400
phi <- 0.4
Phi <- 0.5
Theta <- 0.6
# Generate series (using standard arima.sim)
innovations <- rgamma(n + 100, shape = 2, scale = 1) - 2
y <- arima.sim(
n = n,
model = list(
order = c(1, 1, 0),
ar = phi,
seasonal = list(order = c(1, 1, 1), period = 12,
sar = Phi, sma = Theta)
),
innov = innovations[1:n]
)
# Plot the non-stationary series
plot(y, type = "l", main = "SARIMA(1,1,0)×(1,1,1)_12 Process",
ylab = "Value", xlab = "Time", col = "purple", lwd = 1.5)
# Fit SARIMA with PMM2
fit_sarima_pmm2 <- sarima_pmm2(
y,
order = c(1, 1, 0, 1), # Model orders: (p, P, q, Q)
seasonal = list(
order = c(1, 1), # Differencing orders: (d, D)
period = 12
),
method = "pmm2",
verbose = FALSE
)
# Fit with CSS
fit_sarima_css <- sarima_pmm2(
y,
order = c(1, 1, 0, 1), # Model orders: (p, P, q, Q)
seasonal = list(order = c(1, 1), period = 12),
method = "css",
verbose = FALSE
)
# Compare
cat("True parameters:\n")
cat(" phi =", phi, ", Phi =", Phi, ", Theta =", Theta, "\n\n")
cat("CSS Estimates:\n")
print(coef(fit_sarima_css))
cat("\nPMM2 Estimates:\n")
print(coef(fit_sarima_pmm2))
summary(fit_sarima_pmm2)# For the SAR(1,1)_12 data, compare all available methods
methods <- c("ols", "css", "pmm2")
comparison_results <- data.frame(
Method = methods,
phi_estimate = numeric(length(methods)),
Phi_estimate = numeric(length(methods)),
AIC = numeric(length(methods))
)
for (i in seq_along(methods)) {
fit <- sar_pmm2(y, order = c(1, 1), season = list(period = 12),
method = methods[i], verbose = FALSE)
comparison_results$phi_estimate[i] <- coef(fit)[1]
comparison_results$Phi_estimate[i] <- coef(fit)[2]
comparison_results$AIC[i] <- AIC(fit)
}
print(comparison_results)
#> Method phi_estimate Phi_estimate AIC
#> 1 ols 0.5098779 -0.01978447 524.2121
#> 2 css 0.5098779 -0.01978447 524.2121
#> 3 pmm2 0.5287481 0.02379518 534.5919
# Best method by AIC
best_method <- comparison_results$Method[which.min(comparison_results$AIC)]
cat("\nBest method by AIC:", best_method, "\n")
#>
#> Best method by AIC: ols# Compare different SAR orders
models <- list(
list(name = "SAR(1,0)", order = c(1, 0)),
list(name = "SAR(0,1)", order = c(0, 1)),
list(name = "SAR(1,1)", order = c(1, 1)),
list(name = "SAR(2,1)", order = c(2, 1))
)
ic_results <- data.frame(
Model = character(),
AIC = numeric(),
BIC = numeric(),
stringsAsFactors = FALSE
)
for (model in models) {
fit <- sar_pmm2(y, order = model$order,
season = list(period = 12),
method = "pmm2", verbose = FALSE)
ic_results <- rbind(ic_results, data.frame(
Model = model$name,
AIC = AIC(fit),
BIC = BIC(fit)
))
}
print(ic_results)
#> Model AIC BIC
#> 1 SAR(1,0) 536.9440 540.6478
#> 2 SAR(0,1) 621.8317 625.5354
#> 3 SAR(1,1) 534.5919 541.9994
#> 4 SAR(2,1) 532.2519 543.3633
# Best model
best_aic <- ic_results$Model[which.min(ic_results$AIC)]
best_bic <- ic_results$Model[which.min(ic_results$BIC)]
cat("\nBest model by AIC:", best_aic, "\n")
#>
#> Best model by AIC: SAR(2,1)
cat("Best model by BIC:", best_bic, "\n")
#> Best model by BIC: SAR(1,0)# Use classic airline passengers dataset
# (In practice, load with: data(AirPassengers))
# Here we'll simulate similar data
# Simulate monthly airline passengers with trend and seasonality
n <- 144 # 12 years of monthly data
t <- 1:n
# Trend component
trend <- 100 + 2 * t
# Seasonal component (annual cycle)
seasonal <- 20 * sin(2 * pi * (t - 1) / 12)
# Random component (log-normal innovations for multiplicative effect)
set.seed(123)
random <- rlnorm(n, meanlog = 0, sdlog = 0.1)
# Combine (multiplicative model)
passengers <- (trend + seasonal) * random
# Convert to time series
passengers_ts <- ts(passengers, frequency = 12, start = c(2000, 1))
# Plot
plot(passengers_ts, main = "Simulated Airline Passengers",
ylab = "Passengers", xlab = "Year",
col = "darkblue", lwd = 1.5)# Take log to stabilize variance
log_passengers <- log(passengers_ts)
# First differencing for trend
diff_passengers <- diff(log_passengers)
# Seasonal differencing
diff_seasonal <- diff(diff_passengers, lag = 12)
# Plot differenced series
plot(diff_seasonal, main = "Differenced Log Passengers",
ylab = "Differenced Value", xlab = "Year",
col = "steelblue", lwd = 1.5)
abline(h = 0, col = "red", lty = 2)
# Check ACF and PACF
par(mfrow = c(1, 2))
acf(diff_seasonal, main = "ACF of Differenced Series", na.action = na.pass)
pacf(diff_seasonal, main = "PACF of Differenced Series", na.action = na.pass)par(mfrow = c(1, 1))
# Fit SARIMA(0,1,1)×(0,1,1)_12 (classic airline model)
fit_airline_pmm2 <- sarima_pmm2(
log_passengers,
order = c(0, 0, 1, 1), # Model orders: (p, P, q, Q)
seasonal = list(
order = c(1, 1), # Differencing orders: (d, D)
period = 12
),
method = "pmm2",
verbose = FALSE
)
# Summary
summary(fit_airline_pmm2)
#> PMM2 Seasonal ARIMA estimation results
#> Model type: SARIMA(0,1,1)x(0,1,1)_12
#> Call:
#> sarima_pmm2(x = log_passengers, order = c(0, 0, 1, 1), seasonal = list(order = c(1,
#> 1), period = 12), method = "pmm2", verbose = FALSE)
#>
#> Coefficients:
#> MA: ma1
#> -0.7949216
#> SMA: sma1
#> -0.8754329
#>
#> Central moments of residuals:
#> m2 = 0.01248078
#> m3 = -9.081807e-05
#> m4 = 0.0004719972
#>
#> Theoretical characteristics of PMM2 (S = 2):
#> c3 = -0.06513427
#> c4 = 0.03009182
#> g = 0.9979102 (expected ratio Var[PMM2]/Var[OLS])
#>
#> Algorithm information:
#> Convergence status: TRUE
#> Iterations: 4
# Forecast
forecast_horizon <- 24 # 2 years ahead
forecast_result <- predict(fit_airline_pmm2, n.ahead = forecast_horizon)
# Extract predictions
if (is.list(forecast_result)) {
forecasts <- as.numeric(forecast_result$pred)
} else {
forecasts <- as.numeric(forecast_result)
}
# Plot with forecasts
plot(log_passengers, xlim = c(2000, 2014),
ylim = range(c(log_passengers, forecasts)),
main = "Log Passengers with 2-Year Forecast",
ylab = "Log(Passengers)", xlab = "Year",
col = "darkblue", lwd = 1.5)
lines(ts(forecasts, frequency = 12, start = c(2012, 1)),
col = "red", lwd = 2, lty = 2)
legend("topleft", legend = c("Observed", "Forecast"),
col = c("darkblue", "red"), lty = c(1, 2), lwd = c(1.5, 2))Use SAR/SMA/SARMA/SARIMA when:
Use PMM2 when:
Use classical methods (CSS/OLS) when:
After fitting a seasonal model:
# Example diagnostic workflow
residuals_check <- fit_sar_pmm2@residuals
# 1. Visual inspection
plot(residuals_check, type = "l", main = "Residuals Over Time",
col = "steelblue", ylab = "Residual", xlab = "Time")
abline(h = 0, col = "red", lty = 2)
# 3. Ljung-Box test
lb_test <- Box.test(residuals_check, lag = 20, type = "Ljung-Box")
cat("\nLjung-Box test:\n")
#>
#> Ljung-Box test:
cat(" Test statistic:", round(lb_test$statistic, 4), "\n")
#> Test statistic: 24.2242
cat(" p-value:", round(lb_test$p.value, 4), "\n")
#> p-value: 0.2327
cat(" Interpretation:",
ifelse(lb_test$p.value > 0.05,
"Residuals are white noise (good)",
"Residuals show autocorrelation (model may be misspecified)"), "\n")
#> Interpretation: Residuals are white noise (good)
# 4. Distribution check
cat("\nResidual distribution:\n")
#>
#> Residual distribution:
cat(" Skewness:", round(pmm_skewness(residuals_check), 4), "\n")
#> Skewness: 1.5806
cat(" Kurtosis:", round(pmm_kurtosis(residuals_check), 4), "\n")
#> Kurtosis: 3.4658Some series have multiple seasonal patterns (e.g., daily data with weekly and annual cycles). While EstemPMM currently supports single seasonality, you can:
# Extract seasonal component from fitted model
resid_sar <- as.numeric(fit_sar_pmm2@residuals)
y_fit <- tail(as.numeric(y), length(resid_sar))
fitted_values <- y_fit - resid_sar
seasonal_component <- fitted_values - mean(fitted_values)
# Plot seasonally adjusted series
y_adjusted <- y_fit - seasonal_component
par(mfrow = c(2, 1))
plot(y_fit, type = "l", main = "Original Series (aligned to fitted residuals)",
col = "steelblue", lwd = 1.5, ylab = "Value")
plot(y_adjusted, type = "l", main = "Seasonally Adjusted Series",
col = "darkgreen", lwd = 1.5, ylab = "Value")# Bootstrap confidence intervals for SAR parameters
boot_sar <- ts_pmm2_inference(
fit_sar_pmm2,
B = 500, # Number of bootstrap samples
method = "block", # Block bootstrap for time series
block_length = 12, # Block length = seasonal period
seed = 2025,
parallel = FALSE
)
summary(boot_sar)Key Takeaways:
Recommended Workflow:
Zabolotnii, S., et al. (2018). “Polynomial Estimation of Linear Regression Parameters for Asymmetric PDF of Errors”. Springer, AUTOMATION 2018.
Box, G.E.P., Jenkins, G.M., Reinsel, G.C. (2015). Time Series Analysis: Forecasting and Control. 5th edition, Wiley.
Hyndman, R.J., Athanasopoulos, G. (2021). Forecasting: Principles and Practice. 3rd edition, OTexts.
EstemPMM Package Documentation:
?sar_pmm2 - Seasonal autoregressive models?sma_pmm2 - Seasonal moving average models?sarma_pmm2 - Seasonal ARMA models?sarima_pmm2 - Seasonal ARIMA models?ts_pmm2_inference - Bootstrap inferenceGitHub Repository: https://github.com/SZabolotnii/EstemPMM
vignette("bootstrap_inference") for detailed confidence
interval methodsvignette("pmm2_time_series") for AR, MA, ARMA, ARIMAvignette("pmm2_introduction") for regression
applicationsThis vignette was generated with EstemPMM version 0.3.2.