[Home]


L1VSVM

A demo MATLAB code (L1VSVM.mat) for "Classification by estimating the cumulative distribution function for small data". (Click [Here] to download the complete MATLAB code.)


Reference

Mengxian Zhu, Yuanhai Shao. Classification by estimating the cumulative distribution function for small data. Submitted, 2022. (Corresponding author: Yuanhai Shao.)


Main Function

function [PredY,model] = epsVL1SVM(TestX,DataTrain,Para)
% _______________________________ Input _______________________________
% DataTrain.X - m x n matrix, explanatory variables in training data
% DataTrain.Y - m x 1 vector, response variables in training data {0,1}
% TestX       - mt x n matrix, explanatory variables in test data
% Para.p1     - the empirical risk parameter C
% Para.p2     - the parameter of the v-vector kernel
% Para.p3     - the eps-insensitive parameter eps
% Para.kpar   - kernel parameters: type and parameter value of the kernel
% ______________________________ Output ______________________________
% PredY - mt x 1 vector, predicted response variables for TestX {0,1}
% model - model-related info: w, b
%% Input
gam   = Para.p1;
eps   = Para.p3;
kpar  = Para.kpar;
CDFx  = Para.CDFx;
vsig  = Para.vsig;
v_ker = Para.v_ker;
X = DataTrain.X;
Y = DataTrain.Y;
clear DataTrain
KerX  = kernelfun(X,kpar,X);
[V,~] = VvectorABC(X,CDFx,vsig,v_ker);
[m,~] = size(X);
em = ones(m,1);
%% Dual quadratic program
H = [KerX,-KerX;-KerX,KerX];
H = (H+H')/2;                        % enforce symmetry for quadprog
f = [eps*em-Y ; eps*em+Y];
Aeq = [em;-em]'; beq = 0;            % sum(alpha^*) - sum(alpha) = 0
lb = zeros(2*m,1);
ub = gam*[V;V];                      % sample-wise box constraints from the V-vector
options = optimoptions('quadprog','Display','off');
ALPH = quadprog(H, f, [], [], Aeq, beq, lb, ub, [], options);
ALPH(ALPH < 1e-8) = 0;               % snap near-zero multipliers to zero
idub = ub - ALPH < 1e-8;
ALPH(idub) = ub(idub);               % snap near-upper-bound multipliers to the bound
alphs = ALPH(1:m);                   % alpha^*
alpha = ALPH(m+1:end);               % alpha
alph  = alpha + alphs;
%% Bias term from unbounded support vectors
idSV  = alph > 0;
ida0V = (0 < alpha) & (alpha < gam*V);
ids0V = (0 < alphs) & (alphs < gam*V);
alphaSV = alpha(idSV);
alphsSV = alphs(idSV);
Ya = Y(ida0V);   Ys = Y(ids0V);
Xa = X(ida0V,:); Xs = X(ids0V,:);
XSV = X(idSV,:);
KerXa = kernelfun(Xa,kpar,XSV);
KerXs = kernelfun(Xs,kpar,XSV);
b1 = Ya - KerXa*(alphsSV-alphaSV) + eps;
b2 = Ys - KerXs*(alphsSV-alphaSV) - eps;
b  = mean([b1;b2]);
%% Prediction & Output
w = alphs - alpha;                   % dual coefficient vector
KerTstX = kernelfun(X,kpar,TestX);
F = (w'*KerTstX)' + b;               % real-valued estimate of the CDF on TestX
Dec_Val = F - 0.5;                   % threshold the estimated CDF at 0.5
PredY = sign(Dec_Val);
PredY(PredY==-1) = 0;                % map {-1,1} to {0,1}
model.w = (alphs-alpha)'*X;          % primal w (meaningful for the linear kernel)
model.b = b;
end
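For readers outside MATLAB, the prediction step of the main function can be sketched in NumPy: compute the kernel values between the training and test points, form the real-valued estimate F(x) = Σᵢ (αᵢ* − αᵢ) k(xᵢ, x) + b, and threshold it at 0.5 to recover {0,1} labels. This is an illustrative re-implementation, not the authors' code: a Gaussian (RBF) kernel is assumed for `kernelfun`, and the names `rbf_kernel` and `predict` are hypothetical.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and the rows of B."""
    sq = (np.sum(A**2, axis=1)[:, None]
          + np.sum(B**2, axis=1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-gamma * sq)

def predict(X_train, alph_star, alph, b, X_test, gamma=1.0):
    """Evaluate F(x) = sum_i (alpha*_i - alpha_i) k(x_i, x) + b on X_test
    and threshold the estimated CDF value at 0.5 to get {0,1} labels."""
    K = rbf_kernel(X_train, X_test, gamma)   # m x mt kernel block
    F = K.T @ (alph_star - alph) + b         # mt-vector of CDF estimates
    return (F - 0.5 > 0).astype(int)         # class 1 iff F(x) > 0.5
```

As in the MATLAB code, the sign of `F - 0.5` alone decides the class; `model.w` is only needed for the linear kernel.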
function [V,v] = VvectorABC(X,CDFx,Sigma,v_ker)
% _______________________________ Input _______________________________
% X     - m x n matrix, explanatory variables in training data
% CDFx  - mu(x) in the V-matrix expression: 'uniform' or 'normal'
% v_ker - kernel type in the V-matrix: 'theta_ker' or 'gaussian_ker'
% Sigma - v-kernel parameter of 'gaussian_ker'
% ______________________________ Output ______________________________
% V - V-vector
% v - a cell array holding the v-vector of each dimension
% ____________________________ Test Data _____________________________
% load tes.mat
% CDFx = 'uniform'; v_ker = 'gaussian_ker'; Sigma = 2^-1;
% X = mapminmax(X',0,1)';
[nsamp,nfea] = size(X);
switch CDFx
    %________________________ Uniform Distribution ________________________
    case 'uniform'
        V = zeros(nsamp,1);
        for d = 1:nfea
            [a,b,~,~] = unifit(X(:,d),0.02);   % MLE of the per-feature range [a,b]
            Xd = X(:,d);
            switch v_ker
                case 'theta_ker'
                    v{d} = (b-Xd)/(b-a);
                case 'gaussian_ker'
                    v{d} = 1/(b-a)*(erf((Xd-a)/(sqrt(2)*Sigma)) + erf((b-Xd)/(sqrt(2)*Sigma)));
            end
            V = V + v{d};
        end
    %________________________ Normal Distribution ________________________
    case 'normal'
        alpha = 0.05;
        V = ones(nsamp,1);
        for d = 1:nfea
            [mu,sigma,~,~] = normfit(X(:,d),alpha);
            Xd = X(:,d);
            switch v_ker
                case 'theta_ker'
                    cdf{d} = normcdf(Xd,mu,sigma);
                    v{d} = 1 - cdf{d};
                case 'gaussian_ker'
                    % not implemented in this demo
            end
            V = V + v{d};
        end
end
end
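The simplest branch of the V-vector computation, the uniform-CDF / theta-kernel case, reduces to vᵢ = Σ_d (b_d − x_{i,d})/(b_d − a_d), where [a_d, b_d] is the estimated range of feature d (the maximum-likelihood fit of a uniform distribution is the per-feature min and max, which `unifit` returns). A minimal NumPy sketch under those assumptions; the function name `v_vector_uniform_theta` is hypothetical:

```python
import numpy as np

def v_vector_uniform_theta(X):
    """V-vector for the uniform-CDF / theta-kernel case:
    V_i = sum_d (b_d - x_{i,d}) / (b_d - a_d),
    with [a_d, b_d] the per-feature range (MLE of a uniform law)."""
    a = X.min(axis=0)           # per-feature lower bound estimate
    b = X.max(axis=0)           # per-feature upper bound estimate
    v = (b - X) / (b - a)       # m x n: one v-vector column per feature
    return v.sum(axis=1)        # V-vector: sum over the n features
```

Samples near the per-feature lower bounds thus get weights close to the number of features, while samples near the upper bounds get weights close to zero, which is what scales the box constraints `ub = gam*[V;V]` in the dual problem above.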
Contacts


For any questions or suggestions, please email Mengxian Zhu (mengxian69@163.com) or Yuanhai Shao (shaoyuanhai21@163.com).


  • Last updated: September 25, 2022.