A.I. (without human intervention)

If you have been following this site for any length of time, or from the beginning, then you know there has always been a section for those things of a technical nature that we get little information on. As a preview of the story you are about to read from Zerohedge, I think it would be a good idea to review some stories I posted in the past, shortly before my site was hacked. I'm not saying someone was attempting to suppress the story from everyone! No, by all means, just from some of us.

The one that tops the list for me is the rat-brain story!

I posted this story more than ten years ago. Where do you think this research is now?

A University of Florida scientist has created a living "brain" of cultured rat cells that now controls an F-22 fighter jet flight simulator. An electrode grid was placed at the bottom of a glass dish and then covered with rat neurons. This gradually formed a neural network — a brain. The research could lead to tiny, brain-controlled prosthetic devices and/or unmanned airplanes flown by living computers. (Unmanned airplanes flown by living computers.)

Check the chilling connection to the movie The Matrix.

And there is no need for it to be a rat's brain. Remember (this was also posted, then hacked):

Scientists Create First Synthetic Cell

That story has to be about five years old by now!

You can find the latest stories I've posted that have not yet been touched under Tech Talk. It's not much now, but it could get you to research for yourself.

From Zerohedge:

"Starting a military AI arms race is a bad idea," says an open letter presented at the International Joint Conference on Artificial Intelligence in Buenos Aires and signed by such luminaries as Stephen Hawking and Elon Musk.

The letter looks to be an effort to dissuade governments from developing weapons systems with offensive capabilities that can operate "without human intervention." The plea comes as the world begins to ask tough questions about AI amid frightening portrayals in cinema (Ex Machina) and real-world (if rudimentary) efforts to build killer robots reminiscent of the T-1 terminator.

Although the letter notes that the world's top AI researchers have no interest in unleashing Skynet, that's precisely what will happen once an enterprising nation kicks off the "inevitable" AI arms race, experts say. Once these weapons find their way to the battlefield, the letter continues, it would then be only a matter of time before they hit the black market, at which point they would wind up in the hands of terrorists and all sorts of other unsavory individuals, including dictators and tribal warlords.

Ultimately, the note is a call to action and suggests a ban on "offensive autonomous weapons."

The authors cite similar international agreements on chemical and biological weapons, space nukes, and lasers that blind people. But for everyone out there who enjoys a good drone strike debacle every now and again, don't worry, because the signatories (which also include Steve Wozniak and Noam Chomsky) are just fine with real-life Carrie Mathisons incinerating "terrorists" from the stratosphere (just forget about the occasional collateral damage).

* * *

Musk, Hawking warn of 'inevitable' killer robot arms race

Full letter:

Autonomous Weapons: An Open Letter from AI & Robotics Researchers

Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.

Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.

Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits. Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.

In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.

Related: Gun-toting drone is probably legal, police confirm
