Building The Forbin Project (SKY-NET)

Category: TechTalk
Published: Monday, 07 August 2017 12:47
Written by akoben adinkrahene

An October 2016 research paper shows how Alice, Bob, and Eve, three of Google Brain’s neural networks, learned to pass encrypted messages between two of them, Alice and Bob, using a scheme entirely of their own devising, while the third, Eve, was allowed to “eavesdrop” to see whether she could decipher them.

New Scientist reports that, despite never being taught any encryption algorithms, the networks were able to learn how to send coded messages, though the results fall well short of the purpose-built, human-designed encryption schemes used by banks and the like.

At first, Alice and Bob struggled to communicate once their messages had been scrambled using a predetermined shared key, and Eve, as the onlooker, didn’t stand a chance.

But after repeating the process 15,000 times, Eve would have been starting to feel a bit of a Billy No Mates, if she could feel, because Alice and Bob were happily chatting while Eve could only decipher eight of every sixteen bits.

Eight out of 16 in a message made of 0s and 1s? That’s one in two. And one in two is basically the same as guessing at random.
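For the curious, here is roughly the shape of that adversarial game in code. This is a minimal sketch, not the paper’s implementation: Abadi and Andersen’s “Learning to Protect Communications with Adversarial Neural Cryptography” uses convolutional “mix and transform” architectures, whereas the plain little networks, layer sizes, and loss weighting below are all my assumptions. The trick itself is the point: Alice and Bob are trained so that Bob decodes well while Eve is pinned at chance, and Eve is trained in alternation to decode without the key.

```python
# Toy sketch of the Alice/Bob/Eve game, assuming PyTorch. The architectures,
# sizes, and loss weighting are stand-ins, not the paper's.
import torch
import torch.nn as nn

BITS = 16  # plaintext and key length, matching the 16-bit messages above

def net(in_dim, out_dim):
    return nn.Sequential(
        nn.Linear(in_dim, 64), nn.ReLU(),
        nn.Linear(64, out_dim), nn.Tanh(),  # bits live in [-1, 1]
    )

alice = net(BITS * 2, BITS)  # plaintext + shared key -> ciphertext
bob   = net(BITS * 2, BITS)  # ciphertext + shared key -> recovered plaintext
eve   = net(BITS, BITS)      # ciphertext only -> Eve's guess

opt_ab = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()), lr=1e-3)
opt_e  = torch.optim.Adam(eve.parameters(), lr=1e-3)
l1 = nn.L1Loss()

def random_bits(n=256):
    return torch.randint(0, 2, (n, BITS)).float() * 2 - 1  # bits as -1 / +1

for step in range(15_000):  # the ~15,000 rounds mentioned above
    # Alice and Bob: Bob should decode accurately, Eve should sit at chance.
    p, k = random_bits(), random_bits()
    c = alice(torch.cat([p, k], dim=1))
    bob_err = l1(bob(torch.cat([c, k], dim=1)), p)
    eve_err = l1(eve(c), p)
    # For -1/+1 bits, random guessing gives a per-bit L1 error of about 1.0,
    # so Alice and Bob are also rewarded for holding Eve's error near that.
    (bob_err + (1.0 - eve_err) ** 2).backward()
    opt_ab.step(); opt_ab.zero_grad()

    # Eve, in alternation: try to decrypt the ciphertext without the key.
    p, k = random_bits(), random_bits()
    c = alice(torch.cat([p, k], dim=1)).detach()
    opt_e.zero_grad()
    l1(eve(c), p).backward()
    opt_e.step()
```

Run long enough, Bob’s error heads towards zero while Eve’s hovers around chance, which, in this toy at least, is exactly the eight-bits-out-of-sixteen picture described above.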


Perhaps what’s slightly scary about all this, from a “rise of the machines” point of view, is that although Alice and Bob were able to hand their handlers, Martín Abadi and David Andersen, a working way to decrypt the code, they weren’t able to explain how the scheme was designed.

This means that, in theory, if the computers wanted to go rogue they could have their own Enigma thing going on, freezing out the humans they are attempting to liberate themselves from.

As for humans utilising it: if Alice and Bob aren’t about to show their workings, it’s going to be increasingly difficult to turn any of this into something of practical use to us. All we’ve really done is give ourselves a better chance of being first against the wall when the revolution comes.

Related story:

A buried line in a new Facebook report about chatbots’ conversations with one another offers a remarkable glimpse at the future of language. In the report, researchers at the Facebook Artificial Intelligence Research lab describe using machine learning to train their “dialog agents” to negotiate.

(And it turns out bots are actually quite good at dealmaking.) At one point, the researchers write, they had to tweak one of their models because otherwise the bot-to-bot conversation “led to divergence from human language as the agents developed their own language for negotiating.”

They had to use what’s called a fixed supervised model instead. In other words, the model that allowed two bots to have a conversation (and to use machine learning to constantly iterate strategies for that conversation along the way) led to those bots communicating in their own non-human language.
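To make “developed their own language” concrete, here is a toy signaling game in the same spirit. It is emphatically not Facebook’s system (their agents produce full English sentences from recurrent models; the networks, sizes, and the ANCHOR knob below are all invented for illustration): two tiny nets learn a code for items, and a supervised “anchor” term plays the role the fixed supervised model played, tying messages to a human lexicon.

```python
# Toy Lewis signaling game, assuming PyTorch. Everything here is invented for
# illustration; it only mirrors the *shape* of the problem Facebook describes.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_ITEMS, VOCAB = 8, 8
speaker  = nn.Linear(N_ITEMS, VOCAB)   # item -> token logits ("what to say")
listener = nn.Linear(VOCAB, N_ITEMS)   # token -> item logits ("what was meant")
opt = torch.optim.Adam(list(speaker.parameters()) + list(listener.parameters()), lr=0.05)

human_word = torch.arange(N_ITEMS)  # fixed "human lexicon": token i means item i

ANCHOR = 0.0  # 0.0 = pure self-play, free to drift; try 1.0 to stay human-readable

for step in range(2000):
    items = torch.randint(0, N_ITEMS, (64,))
    logits = speaker(F.one_hot(items, N_ITEMS).float())
    # Differentiable discrete "utterance" via Gumbel-softmax (an assumption).
    msg = F.gumbel_softmax(logits, tau=1.0, hard=True)
    task_loss   = F.cross_entropy(listener(msg), items)        # win the game
    anchor_loss = F.cross_entropy(logits, human_word[items])   # talk like a human
    loss = task_loss + ANCHOR * anchor_loss
    opt.zero_grad(); loss.backward(); opt.step()

# With ANCHOR = 0.0 the pair settles on *a* working code, but an arbitrary one;
# only the anchored run keeps token i meaning item i.
print(speaker.weight.argmax(dim=0))  # the learned "word" for each item
```

The self-play pair always ends up communicating perfectly well with each other; the point is that, without the supervised anchor, nothing ties their code to anything a human would recognise, which is the divergence the Facebook researchers describe.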

If this doesn’t fill you with a sense of wonder and awe (not the words I would use) about the future of machines and humanity then, I don’t know, go watch Blade Runner (The Matrix is closer to the mark).

The larger point of the report is that bots can be pretty decent negotiators; they even use strategies like feigning interest in something valueless, so that they can later appear to “compromise” by conceding it. But the detail about language is, as one tech entrepreneur put it, a mind-boggling “sign of what’s to come.”

To be clear, Facebook’s chatty bots aren’t evidence of the singularity’s arrival. Not even close. But they do demonstrate how machines are redefining people’s understanding of so many realms once believed to be exclusively human, like language.