Training the latest AI systems requires enormous computing resources, which makes it hard for cash-strapped academic labs to keep up with wealthy technology companies. But a new approach could allow scientists to train cutting-edge AI on a single computer. A 2018 analysis from OpenAI found that the processing power required to train the most powerful AIs was doubling every 3.4 months, with deep reinforcement learning being particularly demanding.
Now, researchers at the University of Southern California and Intel Labs have found a way to train deep reinforcement learning algorithms on the kind of high-end computer commonly found in research labs.
Their paper, available as a preprint, was presented at the International Conference on Machine Learning last week. Necessity is the mother of invention: lead author Aleksei Petrenko is a graduate student at the University of Southern California and was an intern at Intel.
When his internship ended, he lost access to the chip giant's supercomputers, leaving his deep reinforcement learning project unfinished. So he and his colleagues decided to find a way to continue the work on simpler systems.