Least Squares Method: Error Ellipse and Adjustment

Least Squares Method

To minimize the effect of random errors, we establish a functional model relating observations, parameters, residuals, and constants. When all observations have equal precision, the condition is $\phi = v^T v = v_1^2 + v_2^2 + \dots = \text{minimum}$; when precisions differ, a weight matrix is introduced and the condition becomes $\phi = v^T P v = \text{minimum}$.

Fundamental Methods of Least Squares Adjustment

Parameter Equations: There are as many equations as observations, and the parameters may appear in them in different forms. Each equation relates the parameters, one observation, and its residual. The number of equations coincides with the number of observations because each observation appears in exactly one equation, and all equations are linear: $v = Ax - L$, with $v\,(n,1)$, $A\,(n,n_0)$, $x\,(n_0,1)$, $L\,(n,1)$.
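As a minimal numerical sketch of this method (the 3-observation setup and all numbers below are invented for illustration), the normal equations $x = (A^T P A)^{-1} A^T P L$ minimize $v^T P v$:

```python
import numpy as np

# Hypothetical example: 3 observations of 2 parameters (numbers invented)
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])          # design matrix: n = 3 observations, n0 = 2 parameters
L = np.array([1.02, 2.01, 2.98])    # observed values
P = np.diag([1.0, 1.0, 2.0])        # weights: the third observation is more precise

# Normal equations: N x = T, with N = A^T P A and T = A^T P L
N = A.T @ P @ A
T = A.T @ P @ L
x = np.linalg.solve(N, T)           # adjusted parameters minimizing v^T P v

v = A @ x - L                       # residuals
print("x =", x, " v =", v)
```

The higher weight in `P` pulls the adjusted parameters toward the more precise third observation.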

Condition Equations: Equations relating the observed values and their residuals; they must be linear: $Bv + d = 0$, with $B\,(r,n)$, $v\,(n,1)$, $d\,(r,1)$, where $r$ is the number of conditions.
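A hedged sketch of the condition-equation solution using the standard formulas $K = -(B\,Q\,B^T)^{-1}\,d$ and $v = Q\,B^T K$; the triangle-closure example and its numbers are invented:

```python
import numpy as np

# Hypothetical example: three measured angles of a triangle must sum to 180 deg
L = np.array([59.998, 60.003, 60.005])   # observed angles (invented numbers)
B = np.array([[1.0, 1.0, 1.0]])          # condition: sum of angles - 180 = 0
d = np.array([L.sum() - 180.0])          # misclosure
Q = np.eye(3)                            # cofactor matrix (equal precision)

# Correlates (Lagrange multipliers) and residuals
K = -np.linalg.solve(B @ Q @ B.T, d)     # K = -(B Q B^T)^{-1} d
v = Q @ B.T @ K                          # residuals distributing the misclosure
print("v =", v, " adjusted sum =", (L + v).sum())
```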

Mathematical Model and Adjustment

Mathematical Model: Observations inevitably contain random errors, so the model is adjusted to minimize their effect. (Number of equations ≥ number of unknowns; with only the minimum number of observations the solution is unique, and redundant observations make an adjustment necessary.)

Adjustment: The statistical assessment of the observations (statistics and probability). The values obtained for the variables depend on the required accuracy.

Concepts

  • Weight: A value that multiplies each observation according to its precision; more precise observations receive higher weights. It is dimensionless and directly related to precision.
  • Variance: Measures the dispersion of the observations; it is inversely related to precision (more dispersion = less precision).
  • Covariance: A value expressing the correlation between two observations ($\sigma_{xy} = \rho\,\sigma_x\,\sigma_y$).
  • Variance-Covariance Matrix: The principal diagonal contains the variances of all observations; the remaining elements are the covariances. If there is no correlation between the observations, the off-diagonal elements are 0 and the matrix is diagonal.
  • Weight Matrix: The principal diagonal contains the weights of the observations; the remaining elements are 0.
  • Cofactor Matrix: The inverse of the weight matrix, obtained by dividing each term of the covariance matrix by the reference variance: $Q_{ll} = \Sigma_{ll}/\sigma_0^2 = P_{ll}^{-1}$.
  • Relationship between Weight Matrix and Covariance Matrix: $\Sigma_{ll} = \sigma_0^2\,Q_{ll}$ and $Q_{ll}\,P_{ll} = I$, so $\Sigma_{ll} = \sigma_0^2\,P_{ll}^{-1}$ and $P_{ll} = \sigma_0^2\,\Sigma_{ll}^{-1}$. The weight matrix may be non-diagonal when there is correlation between the observations.
  • A Posteriori Reference Variance: $\hat{\sigma}_0^2 = \dfrac{v^T P v}{r}$, where $r$ is the redundancy. (A numerical sketch of these last relations follows this list.)
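A minimal sketch of the weight/cofactor/covariance relations and the a posteriori variance, assuming an invented covariance matrix, residual vector, and redundancy:

```python
import numpy as np

sigma0_sq = 1.0                               # a priori reference variance (assumed)
Sigma_ll = np.array([[0.04, 0.01],            # invented covariance matrix of two
                     [0.01, 0.09]])           # correlated observations
Q_ll = Sigma_ll / sigma0_sq                   # cofactor matrix
P = sigma0_sq * np.linalg.inv(Sigma_ll)       # weight matrix (non-diagonal: correlation)

v = np.array([0.02, -0.01])                   # invented residual vector
r = 1                                         # invented redundancy
sigma0_sq_post = (v @ P @ v) / r              # a posteriori reference variance
print(P, sigma0_sq_post)
```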

Variance-Covariance Law of Propagation

Linear Case

Determining the covariance matrix of a vector $y$ that is a linear function of a random vector $x$ whose covariance matrix is known: $y = Ax$, with $y\,(m,1)$, $A\,(m,n)$, $x\,(n,1)$. By definition, $\Sigma_y = E[(y - E[y])(y - E[y])^T]$. Since $E[y] = A\,E[x]$, then: $\Sigma_y = E[(Ax - A\,E[x])(Ax - A\,E[x])^T] = E[A(x - E[x])(x - E[x])^T A^T] = A\,E[(x - E[x])(x - E[x])^T]\,A^T$. Therefore, $\Sigma_y = A\,\Sigma_{xx}\,A^T$.
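A quick numerical illustration of $\Sigma_y = A\,\Sigma_{xx}\,A^T$ (the matrix values are invented):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 1.0]])            # linear map y = A x (invented)
Sigma_xx = np.array([[0.25, 0.05],
                     [0.05, 0.16]])   # covariance of x (invented)

Sigma_y = A @ Sigma_xx @ A.T          # law of propagation, linear case
print(Sigma_y)
```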

Nonlinear Case

$y$ is a random vector that is a nonlinear function of another random vector $x$: $y = G(x)$. Linearize the function around an initial value $x_0$: $y \approx G(x_0) + \left(\dfrac{\partial G}{\partial x}\right)_{x_0}(x - x_0)$, which is a linear function $y \approx G(x_0) + J(x - x_0)$, where $J$ is the Jacobian matrix evaluated at $x_0$. Then $E[y] = E[G(x_0) + J(x - x_0)] = G(x_0) + J(E[x] - x_0)$; $y - E[y] = G(x_0) + J(x - x_0) - G(x_0) - J(E[x] - x_0) = J(x - E[x])$, and $\Sigma_y = E[J(x - E[x])(x - E[x])^T J^T] = J\,E[(x - E[x])(x - E[x])^T]\,J^T = J\,\Sigma_{xx}\,J^T$.
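As an illustrative sketch, consider propagating the variances of a measured distance $d$ and direction $\alpha$ into the coordinates $x = d\cos\alpha$, $y = d\sin\alpha$ (a common surveying case; all numbers are invented). The Jacobian is evaluated at the measured values:

```python
import numpy as np

d, alpha = 100.0, np.radians(30.0)                  # measured distance and direction (invented)
Sigma_xx = np.diag([0.01**2, np.radians(0.01)**2])  # variances of (d, alpha)

# Jacobian of g(d, a) = (d cos a, d sin a), evaluated at the measured values
J = np.array([[np.cos(alpha), -d * np.sin(alpha)],
              [np.sin(alpha),  d * np.cos(alpha)]])

Sigma_y = J @ Sigma_xx @ J.T                        # law of propagation, nonlinear case
print(Sigma_y)
```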

Error Ellipse

Standard Error Ellipse Centered at the Origin

1. Rotation Angle (θ): Starting from the covariance matrix, express the variances in the rotated frame and equate terms: 1) $\sigma_{x'}^2 = \sigma_x^2\cos^2\theta + 2\sigma_{xy}\sin\theta\cos\theta + \sigma_y^2\sin^2\theta$; 2) $\sigma_{y'}^2 = \sigma_x^2\sin^2\theta - 2\sigma_{xy}\sin\theta\cos\theta + \sigma_y^2\cos^2\theta$; 3) $\sigma_{x'y'} = (\sigma_y^2 - \sigma_x^2)\sin\theta\cos\theta + \sigma_{xy}(\cos^2\theta - \sin^2\theta)$. Setting $\sigma_{x'y'} = 0$ gives $\tan 2\theta = \dfrac{2\sigma_{xy}}{\sigma_x^2 - \sigma_y^2}$. Eliminating $\theta$: $\sigma_{x'} = \sqrt{a + b}$ and $\sigma_{y'} = \sqrt{a - b}$, with $a = \dfrac{\sigma_x^2 + \sigma_y^2}{2}$ and $b = \sqrt{\left(\dfrac{\sigma_x^2 - \sigma_y^2}{2}\right)^2 + \sigma_{xy}^2}$.

2. Covariance Matrix of Parameters (Σxx): $\Sigma_{xx}$ is a square matrix $(n \times n)$. We look for the values $\lambda$ that satisfy $\Sigma_{xx}\,x = \lambda x \Rightarrow (\Sigma_{xx} - \lambda I)x = 0 \Rightarrow |\Sigma_{xx} - \lambda I| = 0$. Developing the characteristic equation yields the eigenvalues $(\lambda_1, \lambda_2, \dots, \lambda_n)$, whose square roots give the semi-axes of the ellipse. The orientation is obtained by comparing with the previous equation: $\tan 2\theta = \dfrac{2\sigma_{xy}}{\sigma_x^2 - \sigma_y^2}$.
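A sketch comparing the two routes for an invented $2 \times 2$ covariance matrix: the rotation formula gives the orientation, and the square roots of the eigenvalues give the semi-axes of the standard ellipse:

```python
import numpy as np

Sigma = np.array([[0.09, 0.03],
                  [0.03, 0.04]])      # invented covariance matrix of (x, y)
sx2, sy2, sxy = Sigma[0, 0], Sigma[1, 1], Sigma[0, 1]

theta = 0.5 * np.arctan2(2 * sxy, sx2 - sy2)   # orientation of the major axis
lam = np.linalg.eigvalsh(Sigma)                # eigenvalues in ascending order
a, b = np.sqrt(lam[1]), np.sqrt(lam[0])        # semi-axes of the standard ellipse

print(np.degrees(theta), a, b)
```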

Error Ellipse Probability Associated with a True Value

Consider the random errors in $x$, $y$; if they are correlated, they are first rotated by the angle $\theta$ so that the transformed errors $x'$, $y'$ are uncorrelated. Assuming $\rho = 0$, the error ellipse is $(x/\sigma_x)^2 + (y/\sigma_y)^2 = c^2$. Since $x$ and $y$ are independent and normally distributed, $u = (x'/\sigma_{x'})^2 + (y'/\sigma_{y'})^2$ follows a chi-square distribution with 2 degrees of freedom, whose density function is $f(u) = \frac{1}{2}e^{-u/2}$ (the constant $c$ fixes the level at which the normal distribution is cut). The probability that the position given by $x$, $y$ lies inside the ellipse is: $P[(x/\sigma_x)^2 + (y/\sigma_y)^2 \le c^2] = P[u \le c^2] = \int_0^{c^2} \frac{1}{2}e^{-u/2}\,du = 1 - e^{-c^2/2} = P$. (Major semi-axis: $a = c\,\sigma_{x'}$; minor semi-axis: $b = c\,\sigma_{y'}$.)
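Inverting $P = 1 - e^{-c^2/2}$ gives $c = \sqrt{-2\ln(1 - P)}$; the short sketch below tabulates the scale factor for some common confidence levels:

```python
import math

def ellipse_scale(P):
    """Scale factor c such that the c-sigma error ellipse contains probability P."""
    return math.sqrt(-2.0 * math.log(1.0 - P))

for P in (0.394, 0.90, 0.95, 0.99):
    print(f"P = {P:.3f} -> c = {ellipse_scale(P):.3f}")
```

Note that the standard ellipse ($c = 1$) contains only about 39.4 % of the probability, which is why larger values of $c$ are used for 95 % or 99 % confidence regions.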

Family of Error Ellipses and Standard Error Ellipse

The equation representing all possible error ellipses centered at the origin is known as the family of error ellipses: $\left(\dfrac{x}{\sigma_x}\right)^2 - 2\rho\left(\dfrac{x}{\sigma_x}\right)\left(\dfrac{y}{\sigma_y}\right) + \left(\dfrac{y}{\sigma_y}\right)^2 = (1 - \rho^2)\,c^2$. If $c = 1$, we obtain the standard error ellipse equation. The size and orientation of the error ellipse depend on the parameters $\sigma_x$, $\sigma_y$, $\sigma_{xy}$. It is used in topography to determine confidence regions and to obtain the standard positions of points.

Deduction of the Variance-Covariance Matrix

From Residuals (Condition Equations)

$v = Q\,B^T K$, with $K = -P_e\,d$, where $Q_e = B\,Q\,B^T$ and $P_e = Q_e^{-1}$. By propagation, $Q_{kk} = P_e\,Q_{dd}\,P_e^T$; since $Q_{dd} = B\,Q\,B^T = Q_e$, it follows that $Q_{kk} = P_e\,Q_e\,P_e^T = P_e^T = P_e$. Then $Q_{vv} = (Q\,B^T)\,Q_{kk}\,(Q\,B^T)^T = Q\,B^T\,Q_{kk}\,B\,Q$, and $\Sigma_{vv} = \sigma_0^2\,Q_{vv} = \sigma_0^2\,Q\,B^T\,(B\,Q\,B^T)^{-1}\,B\,Q$.
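Reusing the invented triangle-closure setup from the condition-equation sketch above, a numerical check of $\Sigma_{vv} = \sigma_0^2\,Q\,B^T (B\,Q\,B^T)^{-1} B\,Q$:

```python
import numpy as np

B = np.array([[1.0, 1.0, 1.0]])     # same invented triangle condition as above
Q = np.eye(3)                        # cofactor matrix of the observations
sigma0_sq = 1.0                      # a priori reference variance (assumed)

Pe = np.linalg.inv(B @ Q @ B.T)      # Pe = Qe^{-1}, with Qe = B Q B^T
Q_vv = Q @ B.T @ Pe @ B @ Q          # cofactor matrix of the residuals
Sigma_vv = sigma0_sq * Q_vv
print(Sigma_vv)                      # each residual has variance 1/3 here
```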

From Parameters (Parameter Equations)

$x = N^{-1}T$, where $N = A^T P A$ and $T = A^T P L$. By propagation, $Q_{TT} = (A^T P)\,Q_{ll}\,(A^T P)^T = A^T P A = N$ (taking $Q_{ll} = P^{-1}$), so $Q_{xx} = N^{-1}\,Q_{TT}\,(N^{-1})^T = N^{-1} N (N^{-1})^T = N^{-1}$, and $\Sigma_{xx} = \sigma_0^2\,Q_{xx} = \sigma_0^2\,N^{-1}$.
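Continuing the invented parameter-equation sketch from earlier, the covariance of the adjusted parameters follows as $\Sigma_{xx} = \hat{\sigma}_0^2\,N^{-1}$:

```python
import numpy as np

# Same invented setup as the parameter-equation sketch above
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
L = np.array([1.02, 2.01, 2.98])
P = np.diag([1.0, 1.0, 2.0])

N = A.T @ P @ A
x = np.linalg.solve(N, A.T @ P @ L)
v = A @ x - L

r = A.shape[0] - A.shape[1]                   # redundancy: n - n0 = 1
sigma0_sq = (v @ P @ v) / r                   # a posteriori reference variance
Sigma_xx = sigma0_sq * np.linalg.inv(N)       # covariance of the adjusted parameters
print(Sigma_xx)
```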

Deduction of the Variance-Covariance Matrix Associated with the Residual Vector (Parameters)

$\Sigma_{LL} = \sigma_0^2\,Q_{LL}$; from $Bv + d = 0$, with $Q = Q_{LL}$ and $d = B\,L - L_0$: by propagation, $Q_{dd} = B\,Q_{LL}\,B^T = N_B$, and $K = -N_B^{-1}\,d$.