Overview

This R Markdown document contains the code and 2D simulations for error propagation and classification-uncertainty estimation using Monte Carlo simulations, for two generic classes with radial separation classified with a radial SVM.

Packages

windowsFonts(A = windowsFont("DejaVu Sans")) #register the plotting font (Windows only)
palette("Tableau10") #Tableau 10 color palette for the figures

library(caret)      #classification using train()
library(kernlab)    #radial SVM backend used by caret's svmRadial method
library(MASS)       #generate the samples and calculate the augmented noise
library(dplyr)      #data handling
library(knitr)      #knit options for the Rmarkdown report
library(kableExtra) #table options

Local source files containing the functions

source("predict_with_uncertainty_parallel_v4.R")
source("plot_functions.R")

2D simulations with radial separation pattern

set.seed(1234)
n_points <- 10
n_replicates <- 5
noise_sd <- 0.1
rho <- 0

cov_matrix <- matrix(c(noise_sd^2, rho * noise_sd^2,
                       rho * noise_sd^2, noise_sd^2), nrow = 2)

# Generate points along arcs with different radii (radial separation)
radial_curve_points <- function(n_points, radius = 1, theta_range = c(0, 2*pi)) {
  theta_vals <- seq(theta_range[1], theta_range[2], length.out = n_points)
  x_vals <- radius * cos(theta_vals)
  y_vals <- radius * sin(theta_vals)
  return(data.frame(x = x_vals, y = y_vals))
}

# Replicate points with correlated noise
generate_class_curve <- function(n_points, n_replicates, centers, cov_matrix) {
  points <- matrix(0, nrow = n_points * n_replicates, ncol = 2)
  
  for (i in 1:n_points) {
    base_point <- c(centers$x[i] + rnorm(1, 0, 0.1), centers$y[i] + rnorm(1, 0, 0.1)) #jitter each arc point
    
    for (j in 1:n_replicates) {
      noise <- mvrnorm(1, mu = c(0, 0), Sigma = cov_matrix)
      points[(i - 1) * n_replicates + j, ] <- base_point + noise
    }
  }
  return(points)
}

# Class 1: inner arc, Class 2: outer arc
curve1 <- radial_curve_points(n_points, radius = 0.3)
curve2 <- radial_curve_points(n_points, radius = 0.5)

# Generate noisy replicates
class1 <- generate_class_curve(n_points, n_replicates, curve1, cov_matrix)
class2 <- generate_class_curve(n_points, n_replicates, curve2, cov_matrix)


data <- data.frame(
  Class = factor(rep(c("Class 1", "Class 2"), each = n_points * n_replicates)),
  Sample_ID = rep(1:(2 * n_points), each = n_replicates),
  X1 = c(class1[, 1], class2[, 1]),
  X2 = c(class1[, 2], class2[, 2])
)

# Mean center the data
data$X1 <- data$X1 - mean(data$X1)
data$X2 <- data$X2 - mean(data$X2)

# Calculate the mean and standard deviation of each sample
mean_data <- aggregate(cbind(X1, X2) ~ Class + Sample_ID, data = data, FUN = mean)
sd_data <- aggregate(cbind(X1, X2) ~ Class + Sample_ID, data = data, FUN = sd)

rm(class1, class2)

Fitting the radial SVM model

svm_model <- train(y = mean_data$Class,
                   x = mean_data[,3:4], 
                   method = "svmRadial")
svm_model
## Support Vector Machines with Radial Basis Function Kernel 
## 
## 20 samples
##  2 predictor
##  2 classes: 'Class 1', 'Class 2' 
## 
## No pre-processing
## Resampling: Bootstrapped (25 reps) 
## Summary of sample sizes: 20, 20, 20, 20, 20, 20, ... 
## Resampling results across tuning parameters:
## 
##   C     Accuracy   Kappa    
##   0.25  0.4790476  0.1372783
##   0.50  0.5291270  0.2085414
##   1.00  0.7660317  0.5581022
## 
## Tuning parameter 'sigma' was held constant at a value of 0.3596168
## Accuracy was used to select the optimal model using the largest value.
## The final values used for the model were sigma = 0.3596168 and C = 1.
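The y_mean values reported in the tables below are on the scale of the SVM decision values. As a quick sanity check, these can be inspected directly from the underlying kernlab fit (a minimal sketch; it assumes svm_model$finalModel is the fitted ksvm object, and the sign convention of the decision values depends on the factor level ordering):

# Sketch: inspect the signed decision values of the final radial SVM.
# kernlab's predict method for ksvm objects accepts type = "decision".
dec_vals <- predict(svm_model$finalModel,
                    as.matrix(mean_data[, 3:4]),
                    type = "decision")
head(dec_vals)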

Local uncertainty

The predict_with_uncertainty() function (sourced from predict_with_uncertainty_parallel_v4.R) is used to estimate the uncertainty of each sample.

loc_pred <- predict_with_uncertainty(data[, -1],
                                     svm_model,
                                     n_sim = 500,
                                     return_simulations = TRUE)

This function returns the mean and standard deviation of the \(\hat{y}\) of each sample, based on the Monte Carlo simulation. It also returns the area above and below the decision boundary (in this case, zero) for each sample. The classification output indicates whether the sample is classified with certainty or not: for samples where the distribution of \(\hat{y}\) crosses the decision boundary (considering the 95% confidence level), the sample is assigned as uncertain. Since return_simulations = TRUE, the augmented noise is also saved. The results for this 2D simulated data set are displayed in Table 1.
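As an illustration of this decision rule (a minimal sketch, not the implementation in predict_with_uncertainty_parallel_v4.R), a sample can be flagged as uncertain whenever an approximate 95% interval around the mean \(\hat{y}\) crosses zero:

# Sketch of the decision rule: a sample is "Uncertain" when the
# approximate 95% interval y_mean +/- 2 * y_sd crosses the decision
# boundary at zero (illustrative only; the exact cutoff used by
# predict_with_uncertainty() may differ).
classify_sample <- function(y_mean, y_sd, k = 2) {
  if (y_mean - k * y_sd > 0) "Class 1"       #whole interval above zero
  else if (y_mean + k * y_sd < 0) "Class 2"  #whole interval below zero
  else "Uncertain"                           #interval crosses zero
}

classify_sample(0.4509, 0.2732) #first sample of Class 1 in Table 1: "Uncertain"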

Table 1. Results of the predict_with_uncertainty() function.

| class   | sample | y_mean  | y_sd   | area_above_0 | area_below_0 | classification |
|---------|--------|---------|--------|--------------|--------------|----------------|
| Class 1 | 1      | 0.4509  | 0.2732 | 0.951        | 0.049        | Uncertain      |
| Class 1 | 2      | 0.6898  | 0.3188 | 0.985        | 0.015        | Class 1        |
| Class 1 | 3      | 1.1212  | 0.2082 | 1.000        | 0.000        | Class 1        |
| Class 1 | 4      | 0.9476  | 0.1430 | 1.000        | 0.000        | Class 1        |
| Class 1 | 5      | 1.0045  | 0.2058 | 1.000        | 0.000        | Class 1        |
| Class 1 | 6      | 0.7771  | 0.2269 | 1.000        | 0.000        | Class 1        |
| Class 1 | 7      | 0.5363  | 0.4553 | 0.881        | 0.119        | Uncertain      |
| Class 1 | 8      | 0.2710  | 0.3546 | 0.778        | 0.222        | Uncertain      |
| Class 1 | 9      | 0.9722  | 0.2801 | 1.000        | 0.000        | Class 1        |
| Class 1 | 10     | 0.0862  | 0.1699 | 0.694        | 0.306        | Uncertain      |
| Class 2 | 1      | -0.8965 | 0.2362 | 0.000        | 1.000        | Class 2        |
| Class 2 | 2      | -0.8848 | 0.2389 | 0.000        | 1.000        | Class 2        |
| Class 2 | 3      | -1.4247 | 0.0705 | 0.000        | 1.000        | Class 2        |
| Class 2 | 4      | -0.2844 | 0.3411 | 0.202        | 0.798        | Uncertain      |
| Class 2 | 5      | -0.9952 | 0.1799 | 0.000        | 1.000        | Class 2        |
| Class 2 | 6      | -0.9869 | 0.1648 | 0.000        | 1.000        | Class 2        |
| Class 2 | 7      | -0.4714 | 0.2225 | 0.017        | 0.983        | Class 2        |
| Class 2 | 8      | -0.8171 | 0.2883 | 0.002        | 0.998        | Class 2        |
| Class 2 | 9      | -0.8641 | 0.3775 | 0.011        | 0.989        | Class 2        |
| Class 2 | 10     | -0.8070 | 0.2037 | 0.000        | 1.000        | Class 2        |

These results can be visualized, as shown in Figure 1.

Figure 1. Uncertainty of the classification outputs for the radial SVM model. (a) 2D data simulation. Circles represent the replicates, while triangles represent the mean of the replicates. (b) Augmented noise generated through eigendecomposition of the original covariance matrix for each sample; 50 points are shown for each sample. (c) Estimated uncertainty for each sample.

Correlated noise

The previous example showed how the uncertainty is estimated for independent and identically distributed (i.i.d.) noise. The next examples show how the uncertainty can be estimated for correlated noise, which is commonly found in spectroscopic data.

Positive covariance

For this example, the correlation between X1 and X2 is set to 0.8 (rho = 0.8), giving a positive covariance of rho * noise_sd^2 between the two variables. The results are shown in Table 2.
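Only the covariance matrix of the replicate noise changes with respect to the i.i.d. case; the rest of the data generation and model fitting is identical (a sketch reusing the objects defined above):

# Positive covariance: the off-diagonal terms are rho * noise_sd^2
rho <- 0.8
cov_matrix <- matrix(c(noise_sd^2,       rho * noise_sd^2,
                       rho * noise_sd^2, noise_sd^2), nrow = 2)

# Regenerate the noisy replicates with the correlated noise
class1 <- generate_class_curve(n_points, n_replicates, curve1, cov_matrix)
class2 <- generate_class_curve(n_points, n_replicates, curve2, cov_matrix)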


svm_model
## Support Vector Machines with Radial Basis Function Kernel 
## 
## 20 samples
##  2 predictor
##  2 classes: 'Class 1', 'Class 2' 
## 
## No pre-processing
## Resampling: Bootstrapped (25 reps) 
## Summary of sample sizes: 20, 20, 20, 20, 20, 20, ... 
## Resampling results across tuning parameters:
## 
##   C     Accuracy   Kappa    
##   0.25  0.4845238  0.1435932
##   0.50  0.5370635  0.2275193
##   1.00  0.7033333  0.4651368
## 
## Tuning parameter 'sigma' was held constant at a value of 0.350458
## Accuracy was used to select the optimal model using the largest value.
## The final values used for the model were sigma = 0.350458 and C = 1.
Table 2. Results of the predict_with_uncertainty() function for variables with positive covariance.

| class   | sample | y_mean  | y_sd   | area_above_0 | area_below_0 | classification |
|---------|--------|---------|--------|--------------|--------------|----------------|
| Class 1 | 1      | 0.7680  | 0.1880 | 1.000        | 0.000        | Class 1        |
| Class 1 | 2      | 0.5149  | 0.2487 | 0.981        | 0.019        | Class 1        |
| Class 1 | 3      | 1.2293  | 0.2533 | 1.000        | 0.000        | Class 1        |
| Class 1 | 4      | 0.6274  | 0.1408 | 1.000        | 0.000        | Class 1        |
| Class 1 | 5      | 0.8497  | 0.2784 | 0.999        | 0.001        | Class 1        |
| Class 1 | 6      | 0.7621  | 0.1654 | 1.000        | 0.000        | Class 1        |
| Class 1 | 7      | 0.4618  | 0.5193 | 0.813        | 0.187        | Uncertain      |
| Class 1 | 8      | 0.2872  | 0.1555 | 0.968        | 0.032        | Uncertain      |
| Class 1 | 9      | 0.9778  | 0.1186 | 1.000        | 0.000        | Class 1        |
| Class 1 | 10     | 0.3006  | 0.1646 | 0.966        | 0.034        | Uncertain      |
| Class 2 | 1      | -0.9786 | 0.1568 | 0.000        | 1.000        | Class 2        |
| Class 2 | 2      | -0.7536 | 0.2911 | 0.005        | 0.995        | Class 2        |
| Class 2 | 3      | -1.4559 | 0.0395 | 0.000        | 1.000        | Class 2        |
| Class 2 | 4      | -0.3908 | 0.2474 | 0.057        | 0.943        | Uncertain      |
| Class 2 | 5      | -1.0010 | 0.1540 | 0.000        | 1.000        | Class 2        |
| Class 2 | 6      | -0.9141 | 0.1322 | 0.000        | 1.000        | Class 2        |
| Class 2 | 7      | -0.3489 | 0.3292 | 0.145        | 0.855        | Uncertain      |
| Class 2 | 8      | -0.7049 | 0.1674 | 0.000        | 1.000        | Class 2        |
| Class 2 | 9      | -1.0274 | 0.0593 | 0.000        | 1.000        | Class 2        |
| Class 2 | 10     | -0.1454 | 0.2245 | 0.259        | 0.741        | Uncertain      |

The results can also be visualized, as shown in Figure 2.

Figure 2. Uncertainty of the classification outputs for the radial SVM model with positive covariance. (a) 2D data simulation. Circles represent the replicates, while triangles represent the mean of the replicates. (b) Augmented noise generated through eigendecomposition of the original covariance matrix for each sample; 50 points are shown for each sample. (c) Estimated uncertainty for each sample.

Negative covariance

For this example, the correlation between X1 and X2 is set to -0.8 (rho = -0.8). The results are shown in Table 3.
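The covariance matrix is built exactly as in the positive case, now with a negative rho (sketch):

# Negative covariance: same construction with rho = -0.8
rho <- -0.8
cov_matrix <- matrix(c(noise_sd^2,       rho * noise_sd^2,
                       rho * noise_sd^2, noise_sd^2), nrow = 2)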


svm_model
## Support Vector Machines with Radial Basis Function Kernel 
## 
## 20 samples
##  2 predictor
##  2 classes: 'Class 1', 'Class 2' 
## 
## No pre-processing
## Resampling: Bootstrapped (25 reps) 
## Summary of sample sizes: 20, 20, 20, 20, 20, 20, ... 
## Resampling results across tuning parameters:
## 
##   C     Accuracy   Kappa    
##   0.25  0.4826984  0.1410489
##   0.50  0.5587302  0.2545688
##   1.00  0.7126190  0.4514683
## 
## Tuning parameter 'sigma' was held constant at a value of 0.358453
## Accuracy was used to select the optimal model using the largest value.
## The final values used for the model were sigma = 0.358453 and C = 1.
Table 3. Results of the predict_with_uncertainty() function for variables with negative covariance.

| class   | sample | y_mean  | y_sd   | area_above_0 | area_below_0 | classification |
|---------|--------|---------|--------|--------------|--------------|----------------|
| Class 1 | 1      | 0.9173  | 0.1149 | 1.000        | 0.000        | Class 1        |
| Class 1 | 2      | 0.6752  | 0.2078 | 0.999        | 0.001        | Class 1        |
| Class 1 | 3      | 1.0475  | 0.2068 | 1.000        | 0.000        | Class 1        |
| Class 1 | 4      | 0.8632  | 0.3294 | 0.996        | 0.004        | Class 1        |
| Class 1 | 5      | 0.8646  | 0.4362 | 0.976        | 0.024        | Uncertain      |
| Class 1 | 6      | 1.0184  | 0.3065 | 1.000        | 0.000        | Class 1        |
| Class 1 | 7      | 0.6200  | 0.3431 | 0.965        | 0.035        | Uncertain      |
| Class 1 | 8      | 0.5181  | 0.3251 | 0.944        | 0.056        | Uncertain      |
| Class 1 | 9      | 0.9875  | 0.2521 | 1.000        | 0.000        | Class 1        |
| Class 1 | 10     | 0.1090  | 0.2323 | 0.681        | 0.319        | Uncertain      |
| Class 2 | 1      | -0.1469 | 0.3385 | 0.332        | 0.668        | Uncertain      |
| Class 2 | 2      | -0.8778 | 0.1570 | 0.000        | 1.000        | Class 2        |
| Class 2 | 3      | -1.4009 | 0.0723 | 0.000        | 1.000        | Class 2        |
| Class 2 | 4      | -0.3424 | 0.3874 | 0.188        | 0.812        | Uncertain      |
| Class 2 | 5      | -0.9656 | 0.3178 | 0.001        | 0.999        | Class 2        |
| Class 2 | 6      | -0.9925 | 0.1043 | 0.000        | 1.000        | Class 2        |
| Class 2 | 7      | -0.5872 | 0.1497 | 0.000        | 1.000        | Class 2        |
| Class 2 | 8      | -0.4574 | 0.3251 | 0.080        | 0.920        | Uncertain      |
| Class 2 | 9      | -0.3757 | 0.4875 | 0.220        | 0.780        | Uncertain      |
| Class 2 | 10     | -0.6672 | 0.3188 | 0.018        | 0.982        | Class 2        |

The results can also be visualized, as shown in Figure 3.

Figure 3. Uncertainty of the classification outputs for the radial SVM model with negative covariance. (a) 2D data simulation. Circles represent the replicates, while triangles represent the mean of the replicates. (b) Augmented noise generated through eigendecomposition of the original covariance matrix for each sample; 50 points are shown for each sample. (c) Estimated uncertainty for each sample.