A major advantage of General Regression Neural Networks (GRNN) over other types of neural networks is that there is only a single hyper-parameter, namely the smoothing parameter sigma. In the previous post (https://statcompute.wordpress.com/2019/07/06/latin-hypercube-sampling-in-hyper-parameter-optimization), I've shown how to use the random search strategy to find a close-to-optimal value of sigma with various random number generators, including uniform random numbers, the Sobol sequence, and Latin hypercube sampling.
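For reference, one-dimensional Latin hypercube sampling can be sketched in a few lines of base R: divide the search range into n equal strata and draw one uniform point from each. The helper name gen_latin_sketch() is hypothetical and only meant to mirror the behavior of the gen_latin() helper used in the example below.

```r
# Hypothetical sketch of 1-D Latin hypercube sampling over [min, max]:
# split the range into n equal strata and draw one point per stratum,
# so the samples cover the range more evenly than plain uniform draws.
gen_latin_sketch <- function(min, max, n, seed = 2019) {
  set.seed(seed)
  # sample(n) shuffles the strata; runif(n) places one point inside each
  min + (max - min) * (sample(n) - runif(n)) / n
}

s <- gen_latin_sketch(min = 1, max = 3, n = 20)
range(s)  # all 20 candidate sigmas fall strictly inside (1, 3)
```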

In addition to the random search, we can also directly optimize sigma against a pre-defined objective function by using the grnn.optmiz_auc() function (https://github.com/statcompute/yager/blob/master/code/grnn.optmiz_auc.R), in which either the golden section search (by default) or Brent's method is employed for the one-dimensional optimization. In the example below, the optimized sigma is able to yield a slightly higher AUC in both training and hold-out samples. As shown in the plot, the optimized sigma in red is right next to the best sigma found by the random search.
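Base R already ships a one-dimensional optimizer of this kind: optimize() combines golden section search with successive parabolic interpolation (Brent's method). The sketch below is illustrative only; the quadratic objective f() is a made-up stand-in for the cross-validated AUC curve that grnn.optmiz_auc() actually maximizes.

```r
# Hypothetical AUC-like objective peaking at sigma = 2.25 with value 0.76,
# standing in for the cross-validated AUC evaluated by grnn.optmiz_auc()
f <- function(sigma) -(sigma - 2.25)^2 + 0.76

# optimize() performs golden section search with successive parabolic
# interpolation (Brent's method) over the interval [1, 3]
rst <- optimize(f, interval = c(1, 3), maximum = TRUE)
rst$maximum    # sigma at the peak, ~2.25
rst$objective  # objective value at the peak, ~0.76
```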

df <- readRDS("df.rds")
source("mob.R")
source("grnnet.R")

# Bin the variables and apply the WoE transformation
bin_out <- batch_bin(df, 3)
df_woe <- batch_woe(df, bin_out$BinLst)
Y <- df$bad
X <- scale(df_woe$df[, -1])

# Split into a training sample (1/4) and a hold-out sample (3/4)
set.seed(2019)
i <- sample(seq(length(Y)), length(Y) / 4)
Y1 <- Y[i]
Y2 <- Y[-i]
X1 <- X[i, ]
X2 <- X[-i, ]

# Direct optimization of sigma with 3-fold cross-validation
net1 <- grnn.fit(x = X1, y = Y1)
rst1 <- grnn.optmiz_auc(net1, lower = 1, upper = 3, nfolds = 3)
#    sigma       auc
# 2.267056 0.7610545

# Random search over 20 sigmas drawn by Latin hypercube sampling
S <- gen_latin(min = 1, max = 3, n = 20)
rst2 <- grnn.search_auc(net1, sigmas = S, nfolds = 3)
#    sigma       auc
# 2.249354 0.7609994

# Hold-out AUC with the directly optimized sigma
MLmetrics::AUC(y_pred = grnn.predict(grnn.fit(x = X1, y = Y1, sigma = rst1$sigma), X2), y_true = Y2)
# 0.7458775

# Hold-out AUC with the best sigma from the random search
MLmetrics::AUC(y_pred = grnn.predict(grnn.fit(x = X1, y = Y1, sigma = rst2$best$sigma), X2), y_true = Y2)
# 0.7458687