Machine Learning Concepts: Clustering, Classification, and Optimization

Clustering with K-Means

In k-means clustering, if the cluster centers are known, assigning points to clusters is straightforward: each point goes to its nearest center. Conversely, if the cluster assignments are known, the centers are easily computed by averaging the points within each cluster; the algorithm alternates these two steps. Common stopping criteria are a threshold on the percentage decrease of the objective or checking whether any assignments change. Because the objective is unchanged by relabeling the clusters, k-means has many equivalent global minima, and in high-dimensional spaces poor local minima are common, making optimization challenging.
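
A minimal sketch of this alternating procedure (Lloyd’s algorithm), assuming NumPy is available; the function and parameter names are illustrative rather than taken from the notes:

```python
import numpy as np

def kmeans(X, k, n_iters=100, tol=1e-4, seed=0):
    """Alternate between assigning points to centers and recomputing centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # initialize from the data
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iters):
        # Assignment step: each point goes to its nearest center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each center becomes the mean of its assigned points.
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        # Stop when the centers barely move (assignments then stop changing too).
        if np.linalg.norm(new_centers - centers) < tol:
            centers = new_centers
            break
        centers = new_centers
    return centers, labels
```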

Probability and Statistics

We distinguish between discrete distributions (taking values such as 1, 2, 3) and continuous ones (taking real values), with the Uniform and Normal (Gaussian) distributions as standard examples. Cross-validation, including K-Fold and Leave-One-Out, assesses model performance by repeatedly splitting the data into training, validation, and test sets.
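
A small sketch of generating K-Fold splits, assuming NumPy; `k_fold_indices` is an illustrative helper, not a library function:

```python
import numpy as np

def k_fold_indices(n_samples, k, seed=0):
    """Split sample indices into k folds; each fold serves once as the held-out set."""
    rng = np.random.default_rng(seed)
    indices = rng.permutation(n_samples)       # shuffle so folds are random
    folds = np.array_split(indices, k)
    for i in range(k):
        held_out = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, held_out

# Leave-One-Out is the special case k = n_samples (each fold holds a single point).
```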

  • Joint Probability: P(A,B)
  • Conditional Probability: P(A|B), where A is conditioned on B
  • Independence: A and B are independent if P(A|B) = P(A)
  • Conditional Independence: A and B are conditionally independent given C if P(A|B, C) = P(A|C)
  • Bayes’ Rule: P(A|B) = P(B|A) * P(A) / P(B)

Bayes’ Rule computes the conditional probability of one event given another. For instance, if A is “the patient has the disease” and B is “the test is positive,” then P(B|A) is the probability of a positive test given that the patient has the disease, P(A) is the disease’s prior probability, and P(B) is the overall probability of a positive test; Bayes’ Rule then yields P(A|B), the probability of disease given a positive test.
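
A worked version of the disease-testing example; the rates below are hypothetical numbers chosen purely for illustration:

```python
# Hypothetical numbers, for illustration only.
p_disease = 0.01            # P(A): prior probability of the disease
p_pos_given_disease = 0.95  # P(B|A): probability of a positive test if diseased
p_pos_given_healthy = 0.05  # P(B|not A): false-positive rate

# P(B): overall probability of a positive test, by the law of total probability.
p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)

# Bayes' Rule: P(A|B) = P(B|A) * P(A) / P(B)
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 3))  # ~0.161: a positive test is far from conclusive
```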

Unbalanced Classes

A class imbalance occurs when one class is significantly more frequent than the others. Metrics such as accuracy ((TP + TN) / (P + N)), precision (TP / (TP + FP)), and recall (TP / (TP + FN)) are used to evaluate performance in such scenarios. Precision is crucial when the quality of positive predictions matters most, while recall is important for capturing most positive instances, even at the cost of more false positives.
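
A small sketch computing these metrics from confusion-matrix counts; the counts in the example are made up to show how class imbalance can inflate accuracy:

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, precision, and recall from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)   # (TP + TN) / (P + N)
    precision = tp / (tp + fp)                   # quality of positive predictions
    recall = tp / (tp + fn)                      # fraction of positives captured
    return accuracy, precision, recall

# With 990 negatives and only 10 positives in the data, a classifier can reach
# ~98.5% accuracy while its precision and recall on the rare class stay mediocre.
print(classification_metrics(tp=5, tn=980, fp=10, fn=5))  # (0.985, 0.333..., 0.5)
```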

Classification Algorithms

K-Nearest Neighbors (K-NN)

  • Training points are simply stored, often in a structure (such as a k-d tree) that supports efficient spatial search.
  • To classify a point X, the K closest points in the training data are identified.
  • The most frequent class among these neighbors is assigned to X (see the sketch after this list).
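
A brute-force sketch of K-NN classification, assuming NumPy (in practice a spatial structure such as a k-d tree makes the neighbor search efficient); names are illustrative:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training points (Euclidean distance)."""
    dists = np.linalg.norm(X_train - x, axis=1)   # distance from x to every training point
    nearest = np.argsort(dists)[:k]               # indices of the k closest points
    votes = Counter(y_train[i] for i in nearest)  # count the class labels among the neighbors
    return votes.most_common(1)[0][0]             # the most frequent class wins
```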

Naive Bayes Classifier

Training:

  • Store P(c) for each class c from training data.
  • For each feature x_i, construct a table or model for P(x_i|c) using the training data.

Testing:

  • For each class c, compute the joint score P(c, X_test) = P(c) · p(x_1|c) · p(x_2|c) · … · p(x_d|c), which is proportional to P(c|X_test).
  • Normalize over the classes to get P(c|X_test) = P(c, X_test) / Σ_c P(c, X_test) (see the sketch below).
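
A minimal sketch of both steps for categorical features, assuming NumPy; the helper names and the small floor used for unseen feature values are illustrative choices, not part of the notes:

```python
import numpy as np
from collections import defaultdict

def train_naive_bayes(X, y):
    """Estimate P(c) and per-feature tables P(x_i|c) from categorical training data."""
    classes, counts = np.unique(y, return_counts=True)
    priors = {c: n / len(y) for c, n in zip(classes, counts)}
    tables = defaultdict(lambda: defaultdict(float))   # tables[(i, c)][value] = P(x_i = value | c)
    for c in classes:
        Xc = X[y == c]
        for i in range(X.shape[1]):
            values, vcounts = np.unique(Xc[:, i], return_counts=True)
            for v, n in zip(values, vcounts):
                tables[(i, c)][v] = n / len(Xc)
    return priors, tables

def predict_naive_bayes(priors, tables, x_test):
    """Score each class by P(c) * prod_i P(x_i|c), then normalize over the classes."""
    scores = {}
    for c, prior in priors.items():
        p = prior
        for i, v in enumerate(x_test):
            p *= tables[(i, c)].get(v, 1e-9)           # tiny floor for values unseen in training
        scores[c] = p
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}   # P(c | X_test)
```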

Neural Networks

Recall that “affine” means linear (a dot product with weights) plus an offset (bias). A non-linearity between layers is essential; otherwise a stack of layers collapses into a single affine function. Weights (W), inputs (x), biases (b), and activations (a) are the key components. For regression, the network’s output can be used directly. For classification, the softmax function is applied: softmax(z_i) = exp(z_i) / Σ_j exp(z_j).
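
A minimal sketch of a single forward pass with one hidden layer, assuming NumPy; the use of ReLU as the non-linearity is an assumption made for illustration:

```python
import numpy as np

def softmax(z):
    """Turn raw scores z into pseudo-probabilities that sum to 1."""
    z = z - z.max()                      # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def forward(x, W1, b1, W2, b2):
    """Affine map -> non-linearity -> affine map -> softmax."""
    a = np.maximum(0.0, W1 @ x + b1)     # affine (W1 x + b1) followed by ReLU
    z = W2 @ a + b2                      # output-layer affine map
    return softmax(z)                    # class pseudo-probabilities
```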

Loss Functions

  • Classification: Cross-entropy loss is used.
  • Regression: Squared error loss is used.

The softmax function provides pseudo-probabilities.
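
A small sketch of cross-entropy loss on softmax outputs; the numbers in the comment are illustrative:

```python
import numpy as np

def cross_entropy(probs, true_class):
    """Negative log of the pseudo-probability assigned to the correct class."""
    return -np.log(probs[true_class])

# E.g. softmax output [0.7, 0.2, 0.1] with true class 0 gives a loss of -log(0.7) ≈ 0.357.
```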

Squared Error Loss

If the network output is a real number, the squared error between the output and the training targets is used as the loss function.
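
A one-line sketch of the squared-error loss, assuming NumPy:

```python
import numpy as np

def squared_error(y_pred, y_true):
    """Mean squared error between the network's real-valued outputs and the targets."""
    return np.mean((y_pred - y_true) ** 2)
```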

Regularization

Neural networks are prone to overfitting. To counteract this, a regularization term that encourages small weights, such as an L2 penalty on the weights, is added to the objective function.
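
A sketch of adding an L2 (weight-decay) penalty to an existing data loss; `lam`, the regularization strength, is an assumed hyperparameter name:

```python
import numpy as np

def regularized_loss(data_loss, weights, lam=1e-3):
    """Add an L2 penalty that pushes weights toward small values."""
    l2_penalty = sum(np.sum(W ** 2) for W in weights)  # sum of squared entries of every weight matrix
    return data_loss + lam * l2_penalty
```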

Optimization

The objective function is a sum over all data points. Gradient descent adjusts weights to reduce error. Gradient computation can be based on:

  • All of the data (full-batch gradient descent, expensive for large datasets).
  • A single random data point (stochastic gradient descent, SGD).
  • A small batch of random data points (mini-batch gradient descent).

The randomness in point selection helps escape poor local minima. Back-propagation computes the gradients efficiently by reusing intermediate results, avoiding redundant calculations.
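
A sketch of mini-batch gradient descent, assuming NumPy and a user-supplied `grad_fn` that returns the gradient of the loss on a batch (in a neural network, back-propagation is what computes this gradient):

```python
import numpy as np

def minibatch_gd(grad_fn, w0, data, lr=0.01, batch_size=32, n_epochs=10, seed=0):
    """Estimate the gradient from a random batch at each step and take a step against it.
    batch_size=1 gives SGD; batch_size=len(data) gives full-batch gradient descent."""
    rng = np.random.default_rng(seed)
    w = w0.copy()
    for _ in range(n_epochs):
        order = rng.permutation(len(data))              # new random order every epoch
        for start in range(0, len(data), batch_size):
            batch = data[order[start:start + batch_size]]
            w = w - lr * grad_fn(w, batch)              # step against the batch-estimated gradient
    return w
```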

Dimensionality

The dimensionality of data refers to the number of variables or features needed to describe a data point; it is sometimes called the number of “degrees of freedom.” For example, 3-dimensional data requires three numbers (x, y, z). Data can often be represented with fewer dimensions D than the raw feature count n (D < n) by ignoring noise, meaning the intrinsic dimensionality is lower than the number of recorded features. Redundant features contribute no new information.
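
An illustrative sketch, assuming NumPy, of 3-dimensional data whose intrinsic dimensionality is 2: the singular values show two large directions and a third that is essentially noise:

```python
import numpy as np

rng = np.random.default_rng(0)
coords_2d = rng.normal(size=(500, 2))                      # the two underlying degrees of freedom
plane = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, -1.0]])      # embeds the plane into 3-D space
X = coords_2d @ plane + 0.01 * rng.normal(size=(500, 3))   # 3-D points that almost lie on a plane

# Two large singular values and one tiny one: the intrinsic dimensionality is 2, not 3.
print(np.linalg.svd(X, compute_uv=False))
```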