
Most of what I've written has been published as e-books and is available at Amazon. Match Play is a golf/suspense novel. Dust of Autumn is a bloody one set in upstate New York. Prairie View is set in South Dakota, with a final scene atop Rattlesnake Butte. Life in the Arbor is a children's book about Rollie Rabbit and his friends (on about a fourth grade level). The Black Widow involves an elaborate extortion scheme. Happy Valley is set in a retirement community. Doggy-Dog World is my memoir. And ES3 is a description of my method for examining English sentence structure.
In case anyone is interested in any of my past posts, an archive list can be found at the bottom of this page. I'd appreciate any feedback you may have by sending me an e-mail note--jertrav33@aol.com. Thanks for your interest.

Tuesday, October 11

Artificial Intelligence

On 60 Minutes last Sunday, we watched an exciting/frightening segment on AI, or “artificial intelligence” for those not already familiar with the acronym. It was exciting in showing us the technological possibilities that lie in our futures, our very near futures, and frightening in the potential malevolence of man versus machine. What happens when machines become so much more intelligent than man that they might not feel any need for mankind? It’s an idea that’s been considered for a long time, both in philosophy and in science fiction. Descartes considered it almost four hundred years ago, and Isaac Asimov, in his I, Robot stories almost seventy years ago, devised his Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Intelligence is one thing, but Man has emotions or feelings that are unrelated to intelligence, that make him human. Can robots be created that have, in addition to their intelligence or knowledge, an understanding of emotions? Even more than understanding, can they actually have those feelings or emotions? And if so, won’t they be almost godlike? We already have computers that no human being can defeat at chess. We already have virtual assistants on our phones, tablets, and computers who (note the humanizing pronoun) speak to us, answer our questions, do our bidding. We have Siri, Cortana, Alexa, Google, and Facebook M. Raj on The Big Bang Theory admitted to falling in love with his phone assistant Siri. In the film Her, Joaquin Phoenix fell in love with his phone’s operating system (and who wouldn’t fall in love with Scarlett Johansson’s voice conversing with us, even having a little phone sex with us?).
In Ex Machina, Alicia Vikander played Ava, the lovely robotic lady who left Caleb, the young programmer, locked in the mountain retreat and went out into the world on her own.

So, what does the future of AI hold for us? Will we be able to build into these machines’ AI psyches a set of 21st Century Laws that will prevent any harm they might do to mankind? Will these machines, or very human robots, continue to help mankind reach for the stars, or will they subjugate us to do their bidding? I don’t know. But it seems that this AI question is right on our doorstep. Will we be ready for it, or is it already too late? See? Exhilarating in the possible positives of AI. Terrifying in the possible negatives of AI.
