This article covers some of the history of the ImageNet dataset, which so many deep learning projects depend on. The dataset was created by humans, based on data that was created by humans, and so on. So, naturally, there are biases in the dataset, and therefore in the projects that use it.
I think we should not consider (most) machine learning models as "trained and finished", but as works in progress. Keep training them, update the models, and learn from the "messy" world they operate in. Build in feedback loops and keep the human in the loop.
(Also posted on my LinkedIn feed)