Winning the Lottery with Continuous Sparsification
Pedro Savarese (TTI-Chicago) savarese@ttic.edu
Hugo Silva (University of Alberta) hugoluis@ualberta.ca
Michael Maire (University of Chicago) mmaire@uchicago.edu
Abstract
The search for efficient, sparse deep neural network models is most prominently performed by pruning: training a dense, overparameterized network and removing parameters, us ...