Some interesting insights from the interview with Jerome Pesenti
Dec 10, 2019
- “[…] as responsible researchers we should continue to consider the risks of potential misapplications and how we can help to mitigate those, while still ensuring that our work advancing AI is as open and reproducible as possible.”
- On the limitations of Deep Learning: “It can propagate human biases, it’s not easy to explain, it doesn’t have common sense, it’s more on the level of pattern matching than robust semantic understanding.” — Even though researchers are working to address these limitations, the concerns remain very much valid today.
- The “wall” in the title refers to the idea that experiments demand ever more computing power. I think this is partly because these experiments try to solve everything with a deep learning solution. Is gradient descent really our best tool for solving AI problems?
(Also posted on my LinkedIn feed)