GPU Speed Test
Do you know why it is only now possible to create truly deep artificial intelligence? Perhaps because we have better algorithms.
I tried to compare how quickly I can multiply 10 million random numbers with another 10 million random numbers in Python. First, I did it the old-fashioned way with a for loop – on my laptop's CPU it took 4.9 seconds – not bad, I thought. On the beefed-up graphics card of my desktop it took 3.7 seconds – fantastic. But what if I used tensors (in this case, ordinary matrices) instead of loops? On the laptop's CPU it took only 9 ms, and on the GPU just 6 ms! Quite an impressive improvement.
Using tensors thus speeds up the computation by 500x on the CPU and 625x on the GPU :)
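The experiment above can be sketched roughly like this. This is a minimal CPU-only sketch using NumPy as the tensor library – an assumption, since the post doesn't name the library it used – and the array is scaled down to 1 million elements so the loop version finishes quickly:

```python
import random
import time

import numpy as np  # NumPy stands in as the tensor library (an assumption)

# Scaled down from the article's 10 million so the loop finishes in a moment
n = 1_000_000

a = [random.random() for _ in range(n)]
b = [random.random() for _ in range(n)]

# Loop version: multiply the numbers pair by pair in pure Python
t0 = time.perf_counter()
loop_result = [x * y for x, y in zip(a, b)]
loop_time = time.perf_counter() - t0

# Tensor version: one vectorized elementwise multiply
a_arr = np.array(a)
b_arr = np.array(b)
t0 = time.perf_counter()
vec_result = a_arr * b_arr
vec_time = time.perf_counter() - t0

print(f"loop: {loop_time * 1000:.1f} ms, "
      f"tensor: {vec_time * 1000:.1f} ms, "
      f"speedup: {loop_time / vec_time:.0f}x")
```

The same idea carries over to the GPU: a library such as PyTorch or TensorFlow runs the vectorized multiply on the graphics card, while a Python for loop stays on the CPU and pays interpreter overhead on every single element.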
Published by Artificial Intelligence on 15 October 2017
Original source: wordpress