Using large-scale brain simulations for machine learning and AI: Neural networks are computationally costly to train, so to date most networks used in machine learning have had only 1 to 10 million connections. “But we suspected that by training much larger networks, we might achieve significantly better accuracy,” said the Google team.
“So we developed a distributed computing infrastructure for training large-scale neural networks. Then, we took an artificial neural network and spread the computation across 16,000 of our CPU cores (in our data centers), and trained models with more than 1 billion connections.”
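The post gives no code, but the data-parallel pattern it alludes to can be sketched in a few lines: each worker holds a shard of the data, pulls the current parameters from a central server, computes a gradient on its shard, and pushes the update back. Below is a minimal numpy toy of that loop, simulated sequentially rather than across machines; every name in it (ParameterServer, worker_grad, the linear-regression stand-in for a real network) is hypothetical, and a system at Google’s scale would also have to partition the model itself across machines.

```python
# Toy sketch of data-parallel training with a central parameter server.
# This is NOT Google's actual infrastructure; all names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression task standing in for a real model.
true_w = rng.normal(size=10)
def make_shard(n=200):
    X = rng.normal(size=(n, 10))
    return X, X @ true_w + 0.1 * rng.normal(size=n)

shards = [make_shard() for _ in range(4)]  # one data shard per "worker"

class ParameterServer:
    """Holds the shared parameters; workers pull copies and push gradients."""
    def __init__(self, dim, lr=0.01):
        self.w = np.zeros(dim)
        self.lr = lr
    def pull(self):
        return self.w.copy()      # workers train on a (possibly stale) copy
    def push(self, grad):
        self.w -= self.lr * grad  # apply a worker's gradient update

def worker_grad(w, X, y, batch=32):
    """One worker's minibatch gradient for mean-squared error."""
    idx = rng.integers(0, len(X), size=batch)
    Xb, yb = X[idx], y[idx]
    return 2.0 / batch * Xb.T @ (Xb @ w - yb)

server = ParameterServer(dim=10)
for step in range(500):
    X, y = shards[step % len(shards)]  # round-robin stands in for parallelism
    w_local = server.pull()
    server.push(worker_grad(w_local, X, y))

print("parameter error:", np.linalg.norm(server.w - true_w))
```

The pull/push split is what makes asynchrony possible in practice: workers can compute on slightly stale parameters without blocking one another, trading some gradient staleness for scaling across many cores.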
“We then ran experiments that asked, informally: If we think of our neural network as simulating a very small-scale ‘newborn brain,’ and show it YouTube video for a week, what will it learn? Our hypothesis was that it would learn to recognize common objects in those videos.
“Indeed, to our amusement, one of our artificial neurons learned to respond strongly to pictures of… cats. Remember that this network had never been told what a cat was, nor was it given even a single image labeled as a cat.”
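What makes this striking is that the training signal contains no labels at all. One standard way to get such a signal from raw frames is reconstruction error: force the input through a narrower hidden layer and penalize the network for failing to reproduce it, so the hidden units must discover whatever structure makes reconstruction possible. The numpy toy below shows that idea with a single-hidden-layer autoencoder on synthetic data; the network described above was vastly larger and deeper, so treat this purely as an illustration of label-free learning.

```python
# Toy single-hidden-layer autoencoder trained on unlabeled data,
# illustrating how features can emerge with no labels anywhere.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))  # stand-in for unlabeled image patches
X -= X.mean(axis=0)              # simple per-feature centering

n_in, n_hidden, lr = 64, 16, 0.01
W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
W2 = rng.normal(scale=0.1, size=(n_hidden, n_in))

for epoch in range(50):
    H = np.tanh(X @ W1)   # hidden "neurons" -- the learned features
    X_hat = H @ W2        # reconstruction of the input
    err = X_hat - X       # training signal: reconstruction error only
    # Backprop through both layers; no labels appear anywhere.
    gW2 = H.T @ err / len(X)
    gH = err @ W2.T * (1 - H**2)
    gW1 = X.T @ gH / len(X)
    W1 -= lr * gW1
    W2 -= lr * gW2

# Each column of W1 is now a feature detector; in the experiment described
# above, one such unit happened to respond selectively to cat images.
print("final reconstruction MSE:", float((err**2).mean()))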