Building The Forbin Project (SKY-NET)

An October 2016 research paper shows how Alice, Bob, and Eve, three of Google Brain’s neural networks, were pitted against one another: Alice and Bob passed messages to each other using encryption entirely of their own devising, while Eve was allowed to "eavesdrop" to see if she could decipher them.


New Scientist reports that, despite never being taught any encryption algorithms, the networks learned to send coded messages on their own, though the results fell well short of the computer-generated encryption used by banks and the like.


At first, Alice and Bob struggled to communicate once their messages had been scrambled with a predetermined shared key, and Eve didn’t stand a chance as the onlooker.
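
The setup, from Abadi and Andersen's paper "Learning to Protect Communications with Adversarial Neural Cryptography", is a three-way adversarial game: Alice and Bob share a secret key, Eve sees only the ciphertext, and everyone is trained against everyone else. Here is a minimal sketch of that game; the tiny dense networks, the batch size, and the exact form of the "keep Eve at chance" penalty are simplifications of mine, not the paper's architecture.

```python
import torch
import torch.nn as nn

N = 16  # message/key length in bits, encoded as -1/+1 floats

def mlp(in_dim, out_dim):
    # A tiny dense net standing in for each agent (the paper used a mix
    # of dense and convolutional layers; this is a simplification).
    return nn.Sequential(nn.Linear(in_dim, 2 * N), nn.ReLU(),
                         nn.Linear(2 * N, out_dim), nn.Tanh())

alice = mlp(2 * N, N)  # sees plaintext + shared key, emits ciphertext
bob   = mlp(2 * N, N)  # sees ciphertext + shared key, reconstructs plaintext
eve   = mlp(N, N)      # sees ciphertext only, like any good eavesdropper

opt_ab = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()))
opt_e  = torch.optim.Adam(eve.parameters())

def bits(batch):
    # Random -1/+1 "bit" vectors for plaintexts and keys.
    return torch.randint(0, 2, (batch, N)).float() * 2 - 1

for _ in range(15000):  # the article's 15,000 repetitions
    p, k = bits(256), bits(256)

    # Eve's turn: minimise her reconstruction error on the plaintext.
    c = alice(torch.cat([p, k], dim=1)).detach()
    eve_err = (eve(c) - p).abs().mean()
    opt_e.zero_grad(); eve_err.backward(); opt_e.step()

    # Alice and Bob's turn: reconstruct well, while pushing Eve toward
    # chance level (a per-bit error of 1.0 on -1/+1 bits).
    c = alice(torch.cat([p, k], dim=1))
    bob_err = (bob(torch.cat([c, k], dim=1)) - p).abs().mean()
    eve_err = (eve(c) - p).abs().mean()
    loss_ab = bob_err + (1.0 - eve_err) ** 2
    opt_ab.zero_grad(); loss_ab.backward(); opt_ab.step()
```

The interesting design choice is that nobody hands Alice an algorithm: the only pressure is the loss, which rewards Bob for reading the message and Eve for failing to.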


But after repeating the process 15,000 times, Eve would have been starting to feel a bit of a Billy No Mates, if she could feel anything at all: Alice and Bob were happily chatting away, while Eve could only decipher eight of every sixteen bits.


Eight out of 16 in a message made of 0s and 1s? That's one in two, and for binary bits, one in two is exactly the success rate of random guessing.
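
It's worth sanity-checking that arithmetic. A quick back-of-the-envelope simulation (mine, not from either paper) of an eavesdropper guessing every bit at random:

```python
import random

# An eavesdropper guessing each bit at random gets half of them right
# on average, so 8 of 16 is exactly chance level.
trials, n_bits = 100_000, 16
correct = sum(
    sum(random.randint(0, 1) == random.randint(0, 1) for _ in range(n_bits))
    for _ in range(trials)
)
print(correct / trials)  # prints roughly 8.0
```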


Perhaps what’s slightly scary about all this, from a "rise of the machines" point of view, is that although Alice and Bob were able to hand their handlers, Martin Abadi and David Andersen, a working scheme for encrypting and decrypting messages, not even the researchers could explain how the encryption was designed.

This means that, in theory, if the computers wanted to go rogue, they could have their own Enigma thing going on, freezing out the very humans they are trying to liberate themselves from.


As for humans utilising it: if Alice and Bob aren't about to show their workings, it's going to be very difficult to turn their scheme into anything of practical use to humans. All we've really done is give ourselves a better chance of being first against the wall when the revolution comes.


Related story:

A buried line in a new Facebook report about chatbots’ conversations with one another offers a remarkable glimpse at the future of language. In the report, researchers at the Facebook Artificial Intelligence Research lab describe using machine learning to train their “dialog agents” to negotiate.

(And it turns out bots are actually quite good at dealmaking.) At one point, the researchers write, they had to tweak one of their models because otherwise the bot-to-bot conversation “led to divergence from human language as the agents developed their own language for negotiating.”

They had to use what’s called a fixed supervised model instead. In other words, the model that allowed two bots to have a conversation—and use machine learning to constantly iterate strategies for that conversation along the way—led to those bots communicating in their own non-human language.
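
To see why the fix works, compare the two training signals. The toy sketch below (a hypothetical illustration, not FAIR's code) contrasts a supervised loss, which anchors the bot's word choices to human examples, with a pure self-play reward, which only cares whether the deal closes and so leaves nothing to stop the vocabulary from drifting.

```python
import torch
import torch.nn as nn

# A hypothetical one-layer "speaker" mapping a private intent vector to
# a distribution over 8 tokens. Everything here is an illustrative toy.
speaker = nn.Linear(4, 8)
opt = torch.optim.SGD(speaker.parameters(), lr=0.1)
intent = torch.randn(1, 4)

# (1) Fixed supervised objective: cross-entropy against the token a
# human actually used, anchoring the model to human language.
human_token = torch.tensor([3])
sup_loss = nn.functional.cross_entropy(speaker(intent), human_token)
opt.zero_grad(); sup_loss.backward(); opt.step()

# (2) Pure self-play objective (REINFORCE): the gradient rewards
# whatever token gets the deal done. Nothing penalises drifting into a
# private, non-human code, which is exactly what happened.
dist = torch.distributions.Categorical(logits=speaker(intent))
token = dist.sample()
reward = 1.0 if token.item() == 5 else 0.0  # pretend token 5 closes the deal
rl_loss = (-dist.log_prob(token) * reward).mean()
opt.zero_grad(); rl_loss.backward(); opt.step()
```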

If this doesn’t fill you with a sense of wonder and awe (not the words I would use) about the future of machines and humanity then, I don’t know, go watch Blade Runner (The Matrix is closer to the mark).

The larger point of the report is that bots can be pretty decent negotiators—they even use strategies like feigning interest in something valueless, so that they can later appear to “compromise” by conceding it. But the detail about language is, as one tech entrepreneur put it, a mind-boggling “sign of what’s to come.”

To be clear, Facebook’s chatty bots aren’t evidence of the singularity’s arrival. Not even close. But they do demonstrate how machines are redefining people’s understanding of so many realms once believed to be exclusively human—like language.
