Outputs from AI Explored at the NeurIPS Conference. What Did It Conclude?
Recently, the Neural Information Processing Systems (NeurIPS) conference took place in Vancouver, Canada, gathering over 13,000 scientists from various fields. Their aim was to explore the outputs of neural networks and the potential of AI to help solve the significant problems afflicting the real world.
One of the most notable participants was Jeff Dean – head of AI development at Google. “There is clearly a very broad space and many opportunities for using machine learning to address issues related to climate change,” says Jeff.
What was new to me is the claim that the models Google trains in its data centres leave a zero carbon footprint, since all of their energy use is said to come from renewable sources.
I am currently studying the architecture of the BERT model, so I was intrigued by Jeff's answer to the question of what models AI experts can expect next. He summarised that transformer-based algorithms tackle the same kinds of problems previously addressed with LSTMs, but in a more sophisticated way, which among other things makes searching on Google more efficient. He then touched on new directions in development: he wants models that perform well not just on hundreds of words but that maintain context even across 10,000 words. The second major area of research today is multimodal models, which combine text with images, sound, or video. Jeff also lamented that people tend to focus on “tuning” existing models rather than trying to discover something entirely new.
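To make the contrast Jeff draws a bit more concrete: an LSTM consumes a sequence one token at a time, so distant context has to survive many sequential updates, whereas self-attention (the core of transformers such as BERT) lets every position look at every other position in one matrix operation. The following is a minimal NumPy sketch of scaled dot-product self-attention; the function name and toy data are illustrative assumptions of mine, not Google's implementation.

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention over an entire sequence at once.

    Unlike an LSTM, which passes a hidden state from token to token,
    every position here attends directly to every other position, so
    long-range context is equally reachable in a single step.
    """
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                   # (seq_len, seq_len) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: rows sum to 1
    return weights @ x                              # context-mixed token vectors

# Toy "sentence": 5 tokens, each a 4-dimensional embedding.
tokens = np.random.default_rng(0).normal(size=(5, 4))
out = self_attention(tokens)
print(out.shape)  # one context-aware vector per token
```

In a real transformer the inputs are first projected into separate query, key, and value matrices and split across multiple heads, but the all-pairs attention pattern above is the part that distinguishes it from an LSTM's step-by-step recurrence.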
If you want to know more, read here: https://venturebeat.com/…/mit-and-ibm-develop-ai-that-reco…/
Original source: wordpress