JP van Oosten

Why ChatGPT makes me uncomfortable

Jan 10, 2023

Brain behind a gate, created with Stable Diffusion

Let me confess something: I have been hesitant to try out ChatGPT. I’ve read a lot about it (it’s hard to miss!) and I really admire the way some people are building new kinds of applications on top of it. But there’s a nagging feeling that I’d love to explore a bit in this post. I’m curious about your points of view, so let’s discuss. Feel free to comment on the LinkedIn post or send me a private message there 💬 if you want to dig deeper into these topics.

↔️
We want AI to help us, not harm us. Sci-fi movies and books are full of examples of AI going rogue, Terminator-style 🦾. The research area that studies this is called alignment: how do we make sure AI stays aligned with human values? A misaligned AI can lead to greater inequality, exclusion of minorities, or even the extinction of human life (according to some longtermism researchers). In this post, I want to focus on one way to improve alignment: the democratization of AI, by including a larger and more diverse group of people in the design of AI models.

🧠
ChatGPT is an amazingly complex piece of engineering. It’s trained on a very large amount of text scraped from the internet, and only a handful of organisations can afford this much compute. This creates a playing field that’s anything but level. It also means that these private organisations determine what ChatGPT can and cannot say. Currently, it’s up to the community to figure out what kinds of biases ChatGPT has. What does it understand about the world? What types of text were and weren’t included in the training data? I don’t think this should be left to the community alone to figure out. There should be more of a dialogue between the owners of the model and its users.

🌸
How do we balance the need for innovation with the need for alignment? You want to be free to create new models and ideas, but you also want to understand what such ideas can do. What kind of data is being used to train a model? What does that mean for any output it creates? This is why I find the BigScience collaboration (with Hugging Face, GENCI and IDRIS) such an interesting approach. In this collaboration, the participants trained a large language model called BLOOM. They asked a community of a thousand researchers what they wanted to understand about large language models, and made it an explicit goal to include diverse sets of data from different regions, contexts and audiences. This makes the process more democratic, inclusive and transparent.
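BLOOM’s weights were also released openly on the Hugging Face Hub, so anyone can download the model and poke at it themselves. As a rough sketch of what that looks like (assuming the small bloom-560m checkpoint and the transformers library; this is my illustration, not part of the BigScience workflow itself):

```python
# Load the smallest openly released BLOOM checkpoint and generate a
# short continuation. Requires the transformers and torch packages.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

prompt = "Large language models should be trained on"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Being able to run the model locally like this, and to read the documentation of what went into it, is exactly the kind of transparency that is hard to get from a closed API.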

Having more collaborations like this would change the discourse to some degree. But what does that mean for the use of ChatGPT and similar big models? Should entrepreneurs and others care? What other forms would make sense for a more democratic and inclusive process for building AI models, and how can such models still be profitable for the companies that run them? As I mentioned in the intro, I’m interested in your thoughts!

(Also posted on my LinkedIn feed)