Recently it was reported that artificial intelligence researchers on Google's Brain team were able to create A.I. that is capable of developing its own secure form of encryption.
The accomplishment currently leaves researchers with more questions than answers.
It seems that nobody at Google can fully explain how this extra layer of encryption actually works; the networks arrived at it entirely on their own.
In the experiment, three neural networks were created: two (named Alice and Bob) were instructed to communicate securely with each other using a shared secret key, while the third (Eve) was assigned the task of intercepting and decrypting those communications.
The result: the two communicating networks developed their own encryption layer that the third network was unable to crack.
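To make the data flow concrete, here is a minimal sketch of who sees what in the experiment. In the real setup Alice, Bob, and Eve are neural networks that *learn* their transformations during training; in this toy stand-in a simple XOR one-time pad plays the role of Alice's learned encryption, which is an illustrative assumption, not the method the networks actually discovered.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16  # message and key length in bits

key = rng.integers(0, 2, N)        # shared secret key, known to Alice and Bob only
plaintext = rng.integers(0, 2, N)  # the message Alice wants to send

# Alice sees (plaintext, key) and produces a ciphertext.
# XOR stands in here for whatever transformation the network learned.
ciphertext = plaintext ^ key

# Bob sees (ciphertext, key) and tries to recover the plaintext.
bob_guess = ciphertext ^ key

# Eve sees only the ciphertext; without the key, she can do no better
# than chance on a one-time pad, so a random guess models her position.
eve_guess = rng.integers(0, 2, N)

print((bob_guess == plaintext).all())  # Bob recovers the message exactly
```

The asymmetry of information is the whole point: Bob's access to the key is what lets him win, and the networks had to discover a transformation with that property on their own.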
Importantly, the A.I.s were not told how to encrypt anything, or which cryptographic techniques to use: they were simply given loss functions (failure conditions) and left to get on with it. In Eve's case, the loss function was simple: the distance, measured in correct and incorrect bits, between Alice's original input plaintext and Eve's guess. For Alice and Bob the loss function was a bit more complex: if Bob's guess (again measured in bits) was too far from the original input plaintext, it counted as a loss; and for Alice, if Eve's guesses were better than random guessing, that also counted as a loss. The result was, in effect, a generative adversarial network (GAN).