computeattention: Compute global attention weights and context vectors for a time series

Usage:
computeattention(
series,
attention_type = "cosine",
window_size = 3,
decay_factor = 5,
temperature = 1,
sigma = 1,
sensitivity = 1,
alpha = 0.5,
beta = 0.5
)

Arguments:

series: Numeric vector containing the time series of length n.

attention_type: String specifying the type of attention mechanism to use. One of "cosine", "exponential", "dot_product", "scaled_dot_product", "gaussian", "linear", "value_based", "hybrid", "parametric". Default is "cosine".

window_size: Integer window size (used by "cosine" attention).

decay_factor: Double decay factor (used by "exponential" attention).

temperature: Double temperature (used by "scaled_dot_product" attention).

sigma: Double sigma (used by "gaussian" attention).

sensitivity: Double sensitivity (used by "value_based" and "hybrid" attention).

alpha: Double alpha (used by "parametric" attention).

beta: Double beta (used by "parametric" attention).
Value:

A list with two components:

attention_weights: n x n matrix in which entry (i, j) is the attention weight of time j on time i. Only entries with j <= i are non-zero (causal attention).

context_vectors: Numeric vector of length n in which entry i is the weighted sum of all values up to time i, using the attention weights.
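The causal structure described above can be sketched numerically. The Python snippet below is an illustration only: the exponential-decay weighting it uses is an assumption (this help page does not specify the exact formula for each attention_type), but it shows the two documented invariants, a lower-triangular n x n weight matrix and a length-n context vector of weighted sums.

```python
import numpy as np

def causal_attention(series, decay_factor=5.0):
    """Sketch of causal attention weights and context vectors.

    NOTE: the exponential decay in the time lag is an assumed,
    hypothetical weighting for illustration; computeattention's
    per-type formulas are not given in this documentation.
    """
    x = np.asarray(series, dtype=float)
    n = len(x)
    w = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1):                # causal: only j <= i is non-zero
            w[i, j] = np.exp(-(i - j) / decay_factor)
        w[i, : i + 1] /= w[i, : i + 1].sum()  # normalize weights for time i
    context = w @ x                           # entry i: weighted sum of x[0..i]
    return w, context

w, ctx = causal_attention([1, 2, 3, 4, 5])
print(w.shape)   # (5, 5)
print(len(ctx))  # 5
```

Whatever the weighting, the returned matrix is causal (zero above the diagonal) and each context entry depends only on past and present values.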
Examples:

# For a series of length 5 using "cosine" attention
series <- c(1, 2, 3, 4, 5)
result <- computeattention(series, attention_type = "cosine", window_size = 3)

# attention_weights is a 5 x 5 matrix; context_vectors has length 5
dim(result$attention_weights)
#> [1] 5 5
length(result$context_vectors)
#> [1] 5