Imperial College London

Professor Wayne Luk

Faculty of Engineering, Department of Computing

Professor of Computer Engineering

Contact

 

+44 (0)20 7594 8313
w.luk


Location

 

434 Huxley Building, South Kensington Campus



Publications

Citation

BibTeX format

@inproceedings{Shao:2018:10.1109/FPT.2016.7929532,
author = {Shao, S and Mencer, O and Luk, W},
doi = {10.1109/FPT.2016.7929532},
pages = {197--200},
publisher = {IEEE},
title = {Dataflow Design for Optimal Incremental SVM Training},
url = {http://dx.doi.org/10.1109/FPT.2016.7929532},
year = {2018}
}

RIS format (EndNote, RefMan)

TY  - CPAPER
AB - This paper proposes a new parallel architecture for incremental training of a Support Vector Machine (SVM), which produces an optimal solution by manipulating the Karush-Kuhn-Tucker (KKT) conditions. Compared to batch training methods, our approach avoids re-training from scratch when the training dataset changes. The proposed architecture is the first to adopt an efficient dataflow organisation. The main novelty is a parametric description of the parallel dataflow architecture, which deploys customisable arithmetic units for the dense linear algebraic operations involved in updating the KKT conditions. The proposed architecture targets on-line SVM training applications. Experimental evaluation with real-world financial data shows that our architecture, implemented on a Stratix-V FPGA, achieves a significant speedup over LIBSVM on a Core i7-4770 CPU.
AU - Shao,S
AU - Mencer,O
AU - Luk,W
DO - 10.1109/FPT.2016.7929532
EP - 200
PB - IEEE
PY - 2018///
SP - 197
TI - Dataflow Design for Optimal Incremental SVM Training
UR - http://dx.doi.org/10.1109/FPT.2016.7929532
UR - http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000402988900029&DestLinkType=FullRecord&DestApp=ALL_WOS&UsrCustomerID=1ba7043ffcc86c417c072aa74d649202
UR - http://hdl.handle.net/10044/1/56417
ER -
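The abstract's central idea is that the KKT conditions of the SVM dual problem tell you whether the current solution is still optimal after the training set changes, so re-training is only triggered on a violation. A minimal numpy sketch of that check (illustrative only — the paper realises this, together with the actual coefficient update, as a parallel dataflow design on FPGA; the function name `kkt_violations` and the tolerance handling are assumptions, not from the paper):

```python
import numpy as np

def kkt_violations(alpha, y, f, C, tol=1e-6):
    """Boolean mask of points violating the soft-margin SVM dual KKT conditions.

    alpha : dual coefficients, shape (n,)
    y     : labels in {-1, +1}, shape (n,)
    f     : decision-function values f(x_i), shape (n,)
    C     : box constraint of the soft-margin SVM
    """
    m = y * f  # functional margin of each point
    viol = np.zeros_like(alpha, dtype=bool)
    # alpha_i == 0       requires  y_i f(x_i) >= 1
    viol |= (alpha <= tol) & (m < 1 - tol)
    # 0 < alpha_i < C    requires  y_i f(x_i) == 1
    viol |= (alpha > tol) & (alpha < C - tol) & (np.abs(m - 1) > tol)
    # alpha_i == C       requires  y_i f(x_i) <= 1
    viol |= (alpha >= C - tol) & (m > 1 + tol)
    return viol

# A changed or newly arrived point only triggers an incremental update when
# it violates these conditions; otherwise the current solution stays optimal.
```

The dense linear-algebra cost the abstract mentions arises in the update step that follows a violation, where the margins and coefficients of the affected points must be adjusted together.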