Recently it emerged that artificial intelligence researchers on Google's Brain project were able to create A.I. capable of developing its own secure secret language, or cryptography.

The accomplishment currently leaves some with more questions than answers.

It seems that nobody at Google has any idea how this extra layer of encryption actually works, not even the A.I.'s creators.

In the experiment, three A.I. networks were created: two were instructed to communicate securely with each other using a shared secret key, while the third was assigned the task of intercepting and decrypting those communications.

The result: the two communicating networks developed their own crypto layer that the third network was unable to crack.

Importantly, the AIs (Alice and Bob, the communicating pair, and Eve, the eavesdropper) were not told how to encrypt anything, or what crypto techniques to use: they were just given a loss function (a failure condition), and then they got on with it. In Eve's case, the loss function was very simple: the distance, measured in correct and incorrect bits, between Alice's original input plaintext and Eve's guess. For Alice and Bob the loss function was a bit more complex: if Bob's guess (again measured in bits) was too far from the original input plaintext, that counted as a loss; likewise for Alice, if Eve's guesses were better than random guessing, that counted as a loss. And thus a generative adversarial network (GAN) was created.
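To make the loss functions above concrete, here is a minimal sketch in plain Python. It is illustrative only, not Google's code: the exact penalty term is an assumption modeled on the underlying research paper, where Alice and Bob are punished only to the extent that Eve does better than random guessing (random guessing on an N-bit message gets about N/2 bits wrong).

```python
# Illustrative sketch of the three-network loss setup (not Google's code).
# Messages are lists of bits; "distance" counts mismatched bits.

def bit_distance(a, b):
    """Number of positions where two equal-length bit strings disagree."""
    return sum(x != y for x, y in zip(a, b))

def eve_loss(plaintext, eve_guess):
    # Eve's loss is simple: how far her guess is from Alice's plaintext.
    return bit_distance(plaintext, eve_guess)

def alice_bob_loss(plaintext, bob_guess, eve_guess, n_bits):
    # Bob's term: his reconstruction should match the plaintext.
    bob_term = bit_distance(plaintext, bob_guess)
    # Eve term (assumed form): penalize Alice/Bob only when Eve beats
    # random guessing, which would get ~n_bits/2 bits wrong on average.
    eve_term = ((n_bits / 2 - bit_distance(plaintext, eve_guess)) ** 2) / (n_bits / 2) ** 2
    return bob_term + eve_term

# Example: 8-bit message, Bob reconstructs perfectly, Eve is at chance level.
p   = [1, 0, 1, 1, 0, 0, 1, 0]
bob = list(p)                       # perfect reconstruction
eve = [0, 1, 1, 1, 0, 1, 1, 1]     # 4 of 8 bits wrong: chance level
print(eve_loss(p, eve))             # -> 4
print(alice_bob_loss(p, bob, eve, 8))  # -> 0.0 (Bob perfect, Eve at chance)
```

During training, gradient descent would push Eve to drive her loss down while pushing Alice and Bob to drive theirs down, which is exactly the adversarial dynamic described above.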

via Google AI invents its own cryptographic algorithm; no one knows how it works | Ars Technica UK
