
Normalization

dynax.normalization_coefficients(std_y, std_v=None, std_a=None, w_y=1.0, w_v=1.0, w_a=1.0, tol=1e-06, maxiter=50, verbosity=1)

Compute normalization factors for signals and their derivatives while preserving derivative consistency.

Consider a trajectory \(y(t) \in \mathbb{R}^n\) together with its derivatives \(v(t) = \partial_t \, y(t),\; a(t) = \partial_{tt} \, y(t)\). Normalizing \(y, v, a\) independently (e.g. to unit variance) breaks the derivative relationships: the derivative of the normalized trajectory \(\partial_t \tilde y(t)\) would no longer match the normalized velocity \(\tilde v(t)\).

To maintain consistency, the same scaling factor \(\alpha_i\) must be applied to \(y_i, v_i, a_i\). Additionally, we allow a rescaling of time, \(\tilde t = t / \tau\), to gain more flexibility. With this, the rescaled signals are $$ \tilde t = \frac{t}{\tau}, \quad \tilde y_i = \alpha_i y_i, \quad \tilde v_i = \tau \alpha_i v_i, \quad \tilde a_i = \tau^2 \alpha_i a_i. $$
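For illustration, the following sketch (not part of dynax) checks this consistency numerically for a sinusoidal trajectory; the values of \(\alpha\) and \(\tau\) are arbitrary placeholders.

```python
import numpy as np

# Illustrative trajectory and its analytic derivatives.
t = np.linspace(0.0, 10.0, 2001)
y = np.sin(t)            # y(t)
v = np.cos(t)            # v(t) = dy/dt
a = -np.sin(t)           # a(t) = d^2y/dt^2

alpha, tau = 2.0, 0.5    # arbitrary example factors

# Rescaled time and signals, following the definitions above.
t_tilde = t / tau
y_tilde = alpha * y
v_tilde = tau * alpha * v
a_tilde = tau**2 * alpha * a

# The numerical derivative of y~ with respect to t~ matches v~,
# and likewise for v~ and a~ (up to finite-difference error).
assert np.allclose(np.gradient(y_tilde, t_tilde), v_tilde, atol=1e-2)
assert np.allclose(np.gradient(v_tilde, t_tilde), a_tilde, atol=1e-2)
```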

This function finds the scaling factors \(\alpha \in \mathbb{R}^n\) and \(\tau \in \mathbb{R}\) that make the standard deviations of \(\tilde y, \tilde v\) and \(\tilde a\) as close as possible to one, while preserving derivative consistency: $$ \partial_{\tilde t} \, \tilde y_i(\tilde t) = \tilde v_i(\tilde t), \quad \partial_{\tilde t \tilde t} \, \tilde y_i(\tilde t) = \tilde a_i(\tilde t). $$

This is done via the following optimization problem: Given standard deviations \(\sigma(y), \sigma(v), \sigma(a)\), minimize $$ \sum_i \big[ w_y (\sigma(\tilde y_i) - 1)^2 \;+\; w_v (\sigma(\tilde v_i) - 1)^2 \;+\; w_a (\sigma(\tilde a_i) - 1)^2 \big], $$ where \(w_y, w_v, w_a\) control the relative importance of each term.
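Since the standard deviations scale as \(\sigma(\tilde y_i) = \alpha_i \sigma(y_i)\), \(\sigma(\tilde v_i) = \tau \alpha_i \sigma(v_i)\), \(\sigma(\tilde a_i) = \tau^2 \alpha_i \sigma(a_i)\), the objective can be written directly in terms of the given standard deviations. A minimal sketch of this objective (purely illustrative, not the dynax implementation):

```python
import numpy as np

def objective(alpha, tau, std_y, std_v, std_a, w_y=1.0, w_v=1.0, w_a=1.0):
    """Objective from above, using sigma(y~) = alpha * sigma(y), etc."""
    return np.sum(
        w_y * (alpha * std_y - 1.0) ** 2
        + w_v * (tau * alpha * std_v - 1.0) ** 2
        + w_a * (tau**2 * alpha * std_a - 1.0) ** 2
    )
```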

Implementation Notes

Minimizing the objective over \(\alpha_i\) at fixed \(\tau\) has a closed-form solution, which reduces the problem to a one-dimensional optimization over \(\tau\): $$ \alpha_i(\tau) = \frac{N_i(\tau)}{D_i(\tau)}, \quad \tau^\star = \underset{\tau}{\operatorname{argmin}} \sum_i -\frac{N_i(\tau)^2}{D_i(\tau)}, $$ where \(N_i(\tau) = w_y \sigma(y_i) + w_v \tau \sigma(v_i) + w_a \tau^2 \sigma(a_i)\) and \(D_i(\tau) = w_y \sigma(y_i)^2 + w_v \tau^2 \sigma(v_i)^2 + w_a \tau^4 \sigma(a_i)^2\). This function solves the reformulated problem using scipy.optimize.minimize with the BFGS method; a minimal sketch of the reduction is given after the notes below. Additionally:

  • Optimization is performed over \(\log(\tau)\) to enforce \(\tau > 0\).
  • The initial guess \(\tau_0\) is computed heuristically from \(\sigma(y), \sigma(v), \sigma(a)\).
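A minimal sketch of the reduced problem under the assumptions above (the helper name `_solve` and the fixed initial guess `tau0` are illustrative, not the dynax source; dynax derives \(\tau_0\) heuristically from the given standard deviations):

```python
import numpy as np
from scipy.optimize import minimize

def _solve(std_y, std_v, std_a, w_y=1.0, w_v=1.0, w_a=1.0, tau0=1.0):
    std_y, std_v, std_a = map(np.atleast_1d, (std_y, std_v, std_a))

    def N(tau):
        return w_y * std_y + w_v * tau * std_v + w_a * tau**2 * std_a

    def D(tau):
        return w_y * std_y**2 + w_v * tau**2 * std_v**2 + w_a * tau**4 * std_a**2

    def reduced_objective(log_tau):
        tau = np.exp(log_tau[0])   # optimize over log(tau) so that tau > 0
        return float(np.sum(-N(tau) ** 2 / D(tau)))

    res = minimize(reduced_objective, x0=[np.log(tau0)], method="BFGS")
    tau = np.exp(res.x[0])
    alpha = N(tau) / D(tau)        # closed-form alpha_i at the optimal tau
    return alpha, tau
```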
PARAMETER DESCRIPTION
std_y

Standard deviation of the signal \(y(t)\).

TYPE: Float[Array, 'n'] | float

std_v

Standard deviation of the signal \(v(t) = \partial_t y(t)\). Optional. Defaults to None.

TYPE: Float[Array, 'n'] | float | None DEFAULT: None

std_a

Standard deviation of the signal \(a(t) = \partial_{tt} y(t)\). Optional. Defaults to None.

TYPE: Float[Array, 'n'] | float | None DEFAULT: None

w_y

Optimization weight for signal \(y(t)\). Defaults to 1.0.

TYPE: float DEFAULT: 1.0

w_v

Optimization weight for signal \(v(t)\). Defaults to 1.0.

TYPE: float DEFAULT: 1.0

w_a

Optimization weight for signal \(a(t)\). Defaults to 1.0.

TYPE: float DEFAULT: 1.0

tol

Relative error in solution for \(\tau\) acceptable for convergence. Defaults to 1e-6.

TYPE: float DEFAULT: 1e-06

maxiter

Maximum number of optimization iterations to perform. Defaults to 50.

TYPE: int DEFAULT: 50

verbosity

Verbosity level. 0: no messages. 1: non-convergence notifications only. 2: also print a message on convergence. 3: print iteration results. Defaults to 1.

TYPE: Literal[0, 1, 2, 3] DEFAULT: 1

RETURNS DESCRIPTION
tuple[Float[Array, 'n'], Scalar]

Tuple containing the scaling factors \(\alpha \in \mathbb{R}^n\) and the time-scaling factor \(\tau\).
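A hypothetical usage example (the standard-deviation values are arbitrary placeholders):

```python
import jax.numpy as jnp
import dynax

# Per-dimension standard deviations of a 2-D trajectory and its derivatives.
std_y = jnp.array([0.8, 3.0])
std_v = jnp.array([2.5, 9.0])
std_a = jnp.array([8.0, 30.0])

alpha, tau = dynax.normalization_coefficients(std_y, std_v=std_v, std_a=std_a)

# Apply the factors following the scaling relations above:
#   t_norm = t / tau
#   y_norm = alpha * y
#   v_norm = tau * alpha * v
#   a_norm = tau**2 * alpha * a
```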