A Unifying Theory of Learning: DL Meets Kernel Methods
I. de Zarzà
Paperback


ETH Zürich

We introduce a framework for using kernel approximations in the mini-batch setting with Stochastic Gradient Descent (SGD) as an alternative to Deep Learning. Building on Random Kitchen Sinks, we provide a C++ library for large-scale machine learning. It contains a CPU-optimized implementation of the algorithm of Le et al. (2013), which computes approximate kernel expansions in log-linear time. The algorithm requires computing products with Walsh-Hadamard matrices. We have developed a cache-friendly Fast Walsh-Hadamard transform that achieves compelling speed and outperforms current state-of-the-art implementations....
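As background to the log-linear claim: the kernel expansion of Le et al. (2013) replaces a dense Gaussian random projection with products of diagonal and Walsh-Hadamard matrices, so the dominant cost per block becomes a Fast Walsh-Hadamard transform. The C++ sketch below is a minimal, unnormalized in-place transform illustrating that O(n log n) butterfly structure; the function name fwht is illustrative, and this is not the library's cache-optimized routine described in the abstract.

#include <cstddef>
#include <vector>

// Minimal illustrative sketch (not the library's cache-optimized routine):
// an in-place, unnormalized Fast Walsh-Hadamard transform on a vector whose
// length is a power of two. It runs in O(n log n) time, which is what makes
// log-linear approximate kernel expansions possible.
void fwht(std::vector<double>& x) {
    const std::size_t n = x.size();           // assumed to be a power of two
    for (std::size_t h = 1; h < n; h <<= 1) { // butterfly span doubles each pass
        for (std::size_t i = 0; i < n; i += h << 1) {
            for (std::size_t j = i; j < i + h; ++j) {
                const double a = x[j];
                const double b = x[j + h];
                x[j]     = a + b;             // sum branch of the butterfly
                x[j + h] = a - b;             // difference branch of the butterfly
            }
        }
    }
}

Applying the Hadamard matrix through such a transform, rather than a dense matrix-vector product, is what reduces the per-block cost from quadratic to log-linear in the input dimension; a cache-friendly variant would additionally organize the passes so that each butterfly stage operates on data resident in cache.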