This blog post is part of a series that describes my summer school project at CERN openlab.
See how an improvement to the L-BFGS algorithm takes advantage of progressive batching, line search, and quasi-Newton updating.
This paper introduces a new neural network architecture and shows how improvements to a Cycle-GAN significantly reduce training time for translating images across domains.
Learn more about an award-winning method that uses noise unnoticeable to humans to confuse neural networks into making enormous mistakes.
A new test shows that changing even just a few words of input can cause some of the best state-of-the-art natural language models to fail.
This research paper introduces DSOD, a framework that outperforms modern variations of region proposal and classification networks.
This research shows that low-bit deep neural networks have a much smaller memory footprint than full-precision models, at only a small cost in accuracy.