I can calculate the motion of heavenly bodies but not the madness of people. -Isaac Newton

## Double Poisson Regression in SAS

In the previous post (https://statcompute.wordpress.com/2016/11/27/more-about-flexible-frequency-models), I showed how to estimate the double Poisson (DP) regression in R with the gamlss package. The hurdle in estimating a DP regression is the calculation of a normalizing constant in the DP density function, which can be computed either as the sum of an infinite series or by a closed-form approximation. In the example below, I will show how to estimate the DP regression in SAS with the GLIMMIX procedure.

First of all, I will show how to estimate the DP regression with the exact DP density function. In this case, we approximate the normalizing constant by a partial sum of the infinite series, as highlighted below.

```
data poi;
  do n = 1 to 5000;
    x1 = ranuni(1);
    x2 = ranuni(2);
    x3 = ranuni(3);
    y = ranpoi(4, exp(1 * x1 - 2 * x2 + 3 * x3));
    output;
  end;
run;

proc glimmix data = poi;
  nloptions tech = quanew update = bfgs maxiter = 1000;
  model y = x1 x2 x3 / link = log solution;
  * DISPERSION PARAMETER AND DP VARIANCE FUNCTION *;
  theta = exp(_phi_);
  _variance_ = _mu_ / theta;
  * TWO KERNELS OF THE DP DENSITY *;
  p_u = (exp(-_mu_) * (_mu_ ** y) / fact(y)) ** theta;
  p_y = (exp(-y) * (y ** y) / fact(y)) ** (1 - theta);
  * PARTIAL SUM OF THE INFINITE SERIES, STARTING FROM THE I = 0 TERM *;
  f = (theta ** 0.5) * ((exp(-_mu_)) ** theta);
  do i = 1 to 100;
    f = f + (theta ** 0.5) * ((exp(-i) * (i ** i) / fact(i)) ** (1 - theta)) * ((exp(-_mu_) * (_mu_ ** i) / fact(i)) ** theta);
  end;
  k = 1 / f;
  * LOG-LIKELIHOOD OF THE EXACT DP DENSITY *;
  prob = k * (theta ** 0.5) * p_y * p_u;
  if log(prob) ~= . then _logl_ = log(prob);
run;
```

Next, I will show the same estimation routine using the closed-form approximation.

```
proc glimmix data = poi;
  nloptions tech = quanew update = bfgs maxiter = 1000;
  model y = x1 x2 x3 / link = log solution;
  theta = exp(_phi_);
  _variance_ = _mu_ / theta;
  p_u = (exp(-_mu_) * (_mu_ ** y) / fact(y)) ** theta;
  p_y = (exp(-y) * (y ** y) / fact(y)) ** (1 - theta);
  * CLOSED-FORM APPROXIMATION OF THE NORMALIZING CONSTANT *;
  k = 1 / (1 + (1 - theta) / (12 * theta * _mu_) * (1 + 1 / (theta * _mu_)));
  prob = k * (theta ** 0.5) * p_y * p_u;
  if log(prob) ~= . then _logl_ = log(prob);
run;
```

While the first approach follows the DP density function more closely and is therefore more accurate, the second approach is more efficient, with a significantly lower computing cost. Both, however, run much faster than the corresponding gamlss() function in R.
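To build intuition for the difference between the two approaches, the partial-sum and closed-form normalizing constants can also be compared outside of SAS. Below is a minimal Python sketch (with made-up values of mu and theta, not tied to the simulated data) that mirrors the DO loop and the approximation formula from the GLIMMIX steps above, working on the log scale to avoid overflow in the factorial and power terms:

```python
from math import exp, factorial, lgamma, log, sqrt

def dp_kernel(y, mu, theta):
    # unnormalized double Poisson density term (Efron, 1986),
    # i.e. theta^0.5 * p_y^(1 - theta) * p_u^theta on the log scale
    log_py = 0.0 if y == 0 else -y + y * log(y) - lgamma(y + 1)
    log_pu = -mu + y * log(mu) - lgamma(y + 1)
    return sqrt(theta) * exp((1 - theta) * log_py + theta * log_pu)

def norm_const_series(mu, theta, terms=200):
    # partial sum of the infinite series, mirroring the DO loop in GLIMMIX
    return 1.0 / sum(dp_kernel(i, mu, theta) for i in range(terms))

def norm_const_approx(mu, theta):
    # closed-form approximation used in the second GLIMMIX step
    return 1.0 / (1 + (1 - theta) / (12 * theta * mu) * (1 + 1 / (theta * mu)))

mu, theta = 3.0, 0.5
print(norm_const_series(mu, theta), norm_const_approx(mu, theta))
```

For moderate mu, the two constants agree closely, which is why the cheaper closed-form version is usually good enough in practice.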

Written by statcompute

April 20, 2017 at 12:48 am

## SAS Macro Calculating Goodness-of-Fit Statistics for Quantile Regression

As shown by Fu and Wu in their presentation (https://www.casact.org/education/rpm/2010/handouts/CL1-Fu.pdf), quantile regression is an appealing approach to modeling severity measures with high volatility, thanks to its statistical characteristics, including robustness to extreme values and the absence of distributional assumptions. Curti and Migueis also pointed out in a research paper (https://www.federalreserve.gov/econresdata/feds/2016/files/2016002r1pap.pdf) that operational losses are more sensitive to macro-economic drivers at the tail, making quantile regression an ideal model to capture such relationships.

While quantile regression can be conveniently estimated in SAS with the QUANTREG procedure, the standard SAS output doesn't provide goodness-of-fit (GoF) statistics. More importantly, the rationale for calculating GoF in a quantile regression is very different from the one employed in OLS or GLM regressions. For instance, the popular R-square is no longer applicable in a quantile regression; a statistic called "R1" should be used instead. In addition, AIC and BIC are also defined differently for quantile regression.

Below is a SAS macro showing how to calculate GoF statistics, including R1 and various information criteria, for a quantile regression.

```
%macro quant_gof(data = , y = , x = , tau = 0.5);
***********************************************************;
* THE MACRO CALCULATES GOODNESS-OF-FIT STATISTICS FOR     *;
* QUANTILE REGRESSION                                     *;
* ------------------------------------------------------- *;
* REFERENCE:                                              *;
*  GOODNESS OF FIT AND RELATED INFERENCE PROCESSES FOR    *;
*  QUANTILE REGRESSION, KOENKER AND MACHADO, 1999         *;
***********************************************************;

options nodate nocenter;
title;

* UNRESTRICTED QUANTILE REGRESSION *;
ods select ParameterEstimates ObjFunction;
ods output ParameterEstimates = _est;
proc quantreg data = &data ci = resampling(nrep = 500);
model &y = &x / quantile = &tau nosummary nodiag seed = 1;
output out = _full p = _p;
run;

* RESTRICTED QUANTILE REGRESSION *;
ods select none;
proc quantreg data = &data ci = none;
model &y = / quantile = &tau nosummary nodiag;
output out = _null p = _p;
run;
ods select all;

proc sql noprint;
select sum(df) into :p from _est;
quit;

proc iml;
use _full;
read all var {&y _p} into A;
close _full;

use _null;
read all var {&y _p} into B;
close _null;

* DEFINE A FUNCTION CALCULATING THE SUM OF ABSOLUTE DEVIATIONS *;
start loss(x);
r = x[, 1] - x[, 2];
z = j(nrow(r), 1, 0);
l = sum(&tau * (r <> z) + (1 - &tau) * (-r <> z));
return(l);
finish;

r1 = 1 - loss(A) / loss(B);
adj_r1 = 1 - ((nrow(A) - 1) * loss(A)) / ((nrow(A) - &p) * loss(B));
aic = 2 * nrow(A) * log(loss(A) / nrow(A)) + 2 * &p;
aicc = 2 * nrow(A) * log(loss(A) / nrow(A)) + 2 * &p * nrow(A) / (nrow(A) - &p - 1);
bic = 2 * nrow(A) * log(loss(A) / nrow(A)) + &p * log(nrow(A));

l = {"R1" "ADJUSTED R1" "AIC" "AICC" "BIC"};
v = r1 // adj_r1 // aic // aicc // bic;
print v[rowname = l format = 20.8 label = "Fit Statistics"];
quit;

%mend quant_gof;
```

Written by statcompute

April 15, 2017 at 8:24 pm

## Random Search for Optimal Parameters

Practices of manual search, grid search, or a combination of both have been successfully employed in machine learning to optimize hyper-parameters. However, in the arena of deep learning, both approaches might become impractical. For instance, the computing cost of a grid search over the hyper-parameters of a multi-layer deep neural network (DNN) could be prohibitively high.

In light of the aforementioned hurdles, Bergstra and Bengio proposed the idea of random search in the paper http://www.jmlr.org/papers/volume13/bergstra12a/bergstra12a.pdf. In their study, random search was found to be more efficient than grid search for hyper-parameter optimization in terms of computing cost.

In the example below, both grid search and random search reach similar results in the SVM parameter optimization based on cross-validation.

```
import pandas as pd
import numpy as np
from sklearn import preprocessing
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.svm import SVC as svc
from sklearn.metrics import make_scorer, roc_auc_score
from scipy import stats

# DATA PREPARATION (THE DATA FRAME df, WITH CARDHLDR AND DEFAULT COLUMNS, IS ASSUMED TO BE LOADED ALREADY)
y = df[df.CARDHLDR == 1].DEFAULT.values
x = preprocessing.scale(df[df.CARDHLDR == 1].iloc[:, 2:12], axis = 0)

# DEFINE MODEL AND PERFORMANCE MEASURE
mdl = svc(probability = True, random_state = 1)
auc = make_scorer(roc_auc_score)

# GRID SEARCH FOR 20 COMBINATIONS OF PARAMETERS
grid_list = {"C": np.arange(2, 10, 2),
"gamma": np.arange(0.1, 1, 0.2)}

grid_search = GridSearchCV(mdl, param_grid = grid_list, n_jobs = 4, cv = 3, scoring = auc)
grid_search.fit(x, y)
grid_search.cv_results_

# RANDOM SEARCH FOR 20 COMBINATIONS OF PARAMETERS
rand_list = {"C": stats.uniform(2, 10),
"gamma": stats.uniform(0.1, 1)}

rand_search = RandomizedSearchCV(mdl, param_distributions = rand_list, n_iter = 20, n_jobs = 4, cv = 3, random_state = 2017, scoring = auc)
rand_search.fit(x, y)
rand_search.cv_results_
```

Written by statcompute

April 10, 2017 at 12:07 am


## A Simple Convolutional Neural Network for The Binary Outcome

Since CNNs (convolutional neural networks) have achieved tremendous success in various challenging applications, e.g. image or digit recognition, one might wonder how to employ CNNs in classification problems with binary outcomes.

Below is an example showing how to use a simple 1D convolutional neural network to predict credit card defaults.

```
### LOAD PACKAGES
from numpy.random import seed
from sklearn.preprocessing import minmax_scale
from keras.layers.convolutional import Conv1D, MaxPooling1D
from keras.optimizers import SGD
from keras.models import Sequential
from keras.layers import Dense, Flatten

### PREPARE THE DATA (df IS ASSUMED TO BE LOADED ALREADY)
Y = df[df.CARDHLDR == 1].DEFAULT
X = minmax_scale(df[df.CARDHLDR == 1].iloc[:, 2:12], axis = 0)
y_train = Y.values
x_train = X.reshape(X.shape[0], X.shape[1], 1)

### FIT A 1D CONVOLUTIONAL NEURAL NETWORK
seed(2017)
conv = Sequential()
conv.add(Conv1D(20, 4, input_shape = x_train.shape[1:3], activation = 'relu'))
conv.add(MaxPooling1D(2))
conv.add(Flatten())
# A SINGLE SIGMOID UNIT MAPS THE CONVOLUTIONAL FEATURES TO THE DEFAULT PROBABILITY
conv.add(Dense(1, activation = 'sigmoid'))
sgd = SGD(lr = 0.1, momentum = 0.9, decay = 0, nesterov = False)
conv.compile(loss = 'binary_crossentropy', optimizer = sgd, metrics = ['accuracy'])
conv.fit(x_train, y_train, batch_size = 500, epochs = 100, verbose = 0)
```

Considering that the 1D convolution is a special case of the 2D convolution, we can also solve the same problem with a 2D convolutional neural network by changing the input shape, as shown below.

```
from numpy.random import seed
from sklearn.preprocessing import minmax_scale
from keras.layers.convolutional import Conv2D, MaxPooling2D
from keras.optimizers import SGD
from keras.models import Sequential
from keras.layers import Dense, Flatten

### df IS ASSUMED TO BE LOADED ALREADY
Y = df[df.CARDHLDR == 1].DEFAULT
X = minmax_scale(df[df.CARDHLDR == 1].iloc[:, 2:12], axis = 0)
y_train = Y.values
x_train = X.reshape(X.shape[0], 1, X.shape[1], 1)

seed(2017)
conv = Sequential()
conv.add(Conv2D(20, (1, 4), input_shape = x_train.shape[1:4], activation = 'relu'))
conv.add(MaxPooling2D((1, 2)))
conv.add(Flatten())
conv.add(Dense(1, activation = 'sigmoid'))
sgd = SGD(lr = 0.1, momentum = 0.9, decay = 0, nesterov = False)
conv.compile(loss = 'binary_crossentropy', optimizer = sgd, metrics = ['accuracy'])
conv.fit(x_train, y_train, batch_size = 500, epochs = 100, verbose = 0)
```