Friday, September 05, 2014

Next Generation Robots and Artificial Intelligence

By Douglas V. Gibbs

Advancements in gaming systems, computers, microchips, and robotics have taken the world by pleasant surprise.  Mechanical devices designed to make life a whole lot easier are emerging to the point that I am about ready to declare English as my second language. . . since we text, tweet and post so much with texting abbreviations and chat acronyms.

BION.

The generations that follow my post-baby boom group of forty-somethings don't even know life without technology.  It's in movies, games, and everyday applications.  They don't remember life before computers, microwaves, cable television, or internet porn.  Life has been electronically handed to them, with every convenience in the world at their fingertips.  "I don't know" is no longer an acceptable response.  Look it up.  You have Google in your pocket.

However, there have been consequences.

Hollywood's fun with technology has created some eye-popping stuff. . . but the storytelling has suffered greatly.

Games are graphic and almost scary-real to play. . . but the basketball courts and public parks are now empty.  Where are the kids that used to ride their bikes up and down the street, with cards in the spokes to make noise, and bandages on their elbows and knees because we didn't wear helmets or pads?

The kids of today are afraid of their own shadow.  Government has convinced them that everything is dangerous.  They must be protected from the outdoors, not just in terms of activities, but radiation from the sun, global warming, global cooling, guns, kidnappers in vans, bugs, disease, a little dirt, and a lot of pollution.  Hell, their parents can't even be trusted.  The current administration has repeatedly told the youngsters to monitor their parents for dangerous speech and Bible-Thumping induced phobias.

No wonder they play games all day inside the house, while both parents are at work (or in the bread line).  The liberal left has painted a pretty frightening world to live in.

Technology, however, is their friend.  They live with their phones, tablets, and whatever else is coming down the pike.  Leaving your device at home is like walking out the door naked, and if you are a member of the younger generations, that very thought is probably followed by a shrill shriek of horror.

I am not so convinced that all of this technology is such a good thing.

I hope you don't mind if I play this technology revolution thing in a manner as would Detective Del Spooner (Will Smith's character in "I, Robot"), viewing the advance of technology with a skeptical eye. . . not skeptical that we can achieve all we say we can, but skeptical regarding the consequences once we achieve all of the things we think we can.  Perhaps the fact that I grew up watching Cylons hunting down humans, and spent my young adult years familiar with Skynet and Terminators, didn't help.

Before Battlestar Galactica, and again after Terminator, I was an avid reader of something called "books" (I am wondering how long it will be before I will have to start describing what those wonderful things were).

Paper?  You mean that tree killing stuff we used to use for reports and forms?

I read books, inhaling the wonderful aroma of the pages, flipping the pages like the wings of pigeons, devouring each word with wonderment and excitement.  And as my reading increased, I mostly dove into the world of Isaac Asimov.  The Russian-born writer is my favorite author, upon whose imaginative works films like I, Robot with Will Smith and Robin Williams' Bicentennial Man were based.  Asimov recognized the potential dangers of rising technology, thinking machines, and robots.  So in his books, written long before the Cylons and Terminators hunted us down in our dystopian world of entertainment (and before I was even born, for that matter), artificial intelligence had a governor, a limiting device, a protocol designed to protect us from runaway machines: the Three Laws of Robotics.

#1 A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

#2 A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

#3 A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

According to I, Robot (the book), those laws appear in the Handbook of Robotics, 56th Edition, 2058.
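For the programmers out there, the priority ordering of the Three Laws can be sketched as a simple rule check.  This is a toy illustration of my own, not anything out of Asimov's fictional handbook:

```python
# A toy sketch of the Three Laws as a priority-ordered rule check.
# The Action fields and the permitted() helper are hypothetical; the
# Laws are a narrative device, not a real specification.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool        # would injure a human, or allow harm through inaction
    ordered_by_human: bool   # was commanded by a human
    endangers_self: bool     # would risk the robot's own existence

def permitted(action: Action) -> bool:
    if action.harms_human:         # First Law outranks everything
        return False
    if action.ordered_by_human:    # Second Law: obey, unless the First Law conflicts
        return True
    return not action.endangers_self  # Third Law: self-preservation comes last

# Obedience outranks self-preservation, so a dangerous order is still obeyed:
print(permitted(Action(harms_human=False, ordered_by_human=True, endangers_self=True)))  # True
```

The point of the hierarchy is that each law only applies when the laws above it are silent, which is exactly where Asimov found his "glitches."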

It looks like we are on target with Isaac Asimov, with today's technology folk expecting us to reach the level of Asimov-style mechanical men by the 2050s.  Surely, Star Trek's "Data" (you know, the Brent Spiner character in Next Generation?) could not be too far behind.

In Asimov's books, after presenting the Three Laws, he added, ". . . of course there are glitches."

The glitch with our current technology is that we couldn't care less about such safeguards.  Not necessary.  It's just a cute little machine, after all.  What harm could any of this do?

Asimov presented us with a question, in his writings, and this is the one we must ponder: "Who is really in charge; and who should be?"

All we have to do is ask the right questions, and understand the players involved in the game.

Siri is the voice on your Apple phone that makes the device almost seem. . . human.  That's the goal, isn't it?  For our technology to eventually allow us to be gods and create beings that can think and reason?

Is artificial intelligence our Tower of Babel?

Jibo is a family robot that is in the development phase as we speak.  It's a little robot with a big personality.  Adorable, sleek, curvy in its design.  It can be a companion, a helper, and fun for the kids.  It's an ideal robot servant.

And as robotic devices and handheld devices become more and more efficient in their abilities to aid humans, Google has announced that they are working on a super-fast "quantum" computer chip as part of a vision to one day have machines that think like humans.

Because they would never turn against us if we gave them the ability to think like us, or even reason better than us, right?

Consequences still exist as we dig deeper into Felix's Magic Bag of Tricks.

Felix.  He was a black and white cat.  Cartoon.

[sigh]

Never mind. . .

The development of Google's "quantum" computer chip is geared towards creating chips that operate on sub-atomic levels, which would make them exponentially faster than the processors currently used in computers.

And then, Jibo could use one of those neat new chips, and Robbie could use one of those neat new chips, and the Cyberdyne Systems Model 101 can have one installed in its shiny skull.  The future looks glorious, doesn't it?

Now for my little twist on this whole thing.  I don't believe artificial intelligence will decide we are a danger to ourselves and must be controlled by the likes of VIKI in I, Robot, or ARIIA in Eagle Eye.  I don't even think robots will become killing machines like Cylons or Terminators.  I think we will become so accustomed to having technology that we will lose our humanity, depend upon machines so much that we will stop doing our own thinking, cease to participate in activities, fail to be properly informed, and fail to act as problem solvers in our culture.  We may very well become bed-ridden like the folks hooked up to their Surrogates in that movie with Bruce Willis, or perhaps become nothing more than batteries like in The Matrix.

Societal atrophy.

I welcome the next generation of technology, and I am excited that we are becoming the science fiction worlds I read about when I was younger, but I am fearful of how human nature will respond to these advancements.  I fear the worst of who we are will enable us to become slaves to a government that will use technology to keep us distracted, so that it can go about making us dependent upon the state.  Technology will not become the tool of mechanical masters, but of a ruling elite that has been there all along, waiting for the right moment, herding the right distractions into place.

I hope I'm wrong.

-- Political Pistachio Conservative News and Commentary


