There’s an interesting cat-and-mouse game going on between the new wave of AI tools and humans. The new tools can generate near-realistic text, images, and audio. As humans, we are constantly searching for ways to tell whether something was created by (or with) an AI, or by a human. People are even building tools to help us do that.
In AI, there’s a concept called a GAN: a Generative Adversarial Network. Two neural networks compete against each other: a generator tries to produce output that looks as realistic as possible, while a discriminator tries to tell real samples from generated ones. An improvement in one forces an improvement in the other, and so both networks are lifted to a higher level.
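To make that concrete, here is a minimal sketch of a GAN training loop in PyTorch. Everything here is illustrative (the toy data, network sizes, and hyperparameters are my own choices, not from any specific implementation), but it shows the adversarial back-and-forth:

```python
# Minimal GAN sketch: a generator learns to mimic a simple 1-D Gaussian,
# while a discriminator learns to tell real samples from fakes.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0   # "real" data: samples from N(2, 0.5)
    fake = generator(torch.randn(64, 8))     # the generator's attempt

    # Discriminator step: push real samples towards label 1, fakes towards 0.
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: try to fool the discriminator into labelling fakes as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

As the discriminator gets better at spotting fakes, the generator is forced to produce more convincing samples, which in turn forces the discriminator to improve again.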
We now see something similar happening at a much larger scale, and interesting initiatives keep popping up. For text alone, there are GPTZero, OpenAI’s AI Text Classifier, and Writer’s AI Content Detector.
However, instead of making texts even more realistic, OpenAI is thinking about adding a watermark to the output of their GPT models. This means that tools to detect whether an article or report was generated with AI would become much more powerful. One way of doing this kind of watermarking is described in this paper by Kirchenbauer et al.: at each generation step, the vocabulary is pseudo-randomly split into a “green” and a “red” list (seeded by the previous token), and green-list tokens get a soft boost. A detector that knows the seeding scheme can then flag text containing suspiciously many green tokens.
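Here’s a toy sketch of that idea in Python. To be clear, this is my own simplified reading of the scheme, not the authors’ code: a real implementation hooks into an LLM’s logits, and values like `GAMMA` and `DELTA` below are illustrative:

```python
# Toy sketch of a "green list" watermark in the spirit of Kirchenbauer et al.
import hashlib
import numpy as np

VOCAB_SIZE = 50_000
GAMMA = 0.5   # fraction of the vocabulary on the green list
DELTA = 2.0   # soft logit boost for green-list tokens

def green_list(prev_token: int) -> np.ndarray:
    """Pseudo-randomly partition the vocabulary, seeded by the previous token."""
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16) % 2**32
    rng = np.random.default_rng(seed)
    return rng.random(VOCAB_SIZE) < GAMMA   # boolean mask of green tokens

def watermarked_logits(logits: np.ndarray, prev_token: int) -> np.ndarray:
    """Boost green-list tokens before sampling the next token."""
    return logits + DELTA * green_list(prev_token)

def detect(tokens: list[int]) -> float:
    """z-score of the green-token count; a high value suggests a watermark."""
    hits = sum(green_list(prev)[tok] for prev, tok in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    return (hits - GAMMA * n) / np.sqrt(GAMMA * (1 - GAMMA) * n)
```

The nice property is that detection only needs the seeding scheme, not the model itself: human text should hit the green list about half the time, while watermarked text hits it far more often, which the z-score picks up.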
Will we see new tools pop up that are better at evading these detection algorithms? Let me know what you think in the comments!
(Also posted on my LinkedIn feed)