@Isabelle, one of our readers asked me to start this topic. Exploring AI has been at the back of my mind ever since @unkp started that fun thread to talk to a chatbot. So what do people think about it?
When I think of AI, my first thought is Stanley Kubrick's 1968 masterpiece 2001: A Space Odyssey, which spans the length of human existence, from the prehistoric moment humans start using tools to when the ultimate tool, a computer running a spaceship, tries to take over and kill the crew. If you have never seen this film, it is even more relevant today than it was in 1968.
The second thought I have about AI is that Stephen Hawking said AI is dangerous if not controlled, and I don't know how humans will ever be able to control it. We can't seem to stop virologists from creating deadly viruses that can jump from bats to humans. Someone out there will always want to make an even more powerful machine.
Of course, AI can do amazing things, and will likely be able to cure diseases, help solve the climate crisis, and much more. Also, AI made NotMilk, the only plant-based milk product that satisfies my milk cravings. So thank you, AI.
But whether or not I am concerned about the future of AI, the cat is out of the bag.
I'm interested in others' thoughts about it. Positive, negative, any thoughts and/or intuitions?
@jeanne-mayell I see AI as much like the discovery of fire: enormous potential for Good... saving lives, making life better... as well as for Evil and Destruction... killing, maiming and causing great pain. It all depends on how one uses it. There will always be... the ones who use it for Good, Higher Purposes... as well as those... who misuse it and cause great harm. TBS? It is... here... that horse has already left the barn.
I am actually quite concerned about the development of AI. While it could have beneficial effects, it also has the potential to disrupt life as we know it and play havoc with financial and job markets, infiltrate banks & brokerage houses, co-opt medical databases and compromise our individual security & privacy even more than we are already facing while we helplessly stand by.
As we know, companies like Microsoft, Google, IBM and others are now market leaders, have spent a fortune developing this technology, and are, in effect, in an 'arms race' to develop the most advanced AI as quickly as possible -- yet there is virtually no current regulation or legal/ethical oversight in this area, which is desperately needed. We have already unleashed the genie and it will be impossible to put it back in the bottle!

Google has already acquired DeepMind, which has built computer-based 'neural networks' that mimic the short-term memory of the human brain. It is only a matter of time before it advances further. Perhaps one day (in the not too distant future) AI may qualify for 'sentience' in a legal sense? Its algorithms may become so densely coded and sophisticated that, for all intents and purposes, it may become virtually indistinguishable from your best friend, your husband or your neighbor! K-pop's hugely successful SuperKind band now has 4 real human singers and one pink-haired AI singer who is indistinguishable from the others -- and the band regularly sells out huge stadiums to its fan base.

ChatGPT's chatbot is taking off like crazy... yet, as of now, thankfully, it is still fairly unreliable in its recognition of patterns and its predictions... but it will no doubt be continuously upgraded and refined. Law firms are now using AI in low-level e-discovery/document review, and the incorporation of AI into law firm life may become inevitable, since rote memory, the seeking of fact patterns and the making of logical deductions are the true strengths of AI.
I can foresee, in a worst-case scenario, humans creating their own built-in obsolescence: being completely out-maneuvered/out-thought by self-learning, self-generative, amoral AI with which we are exponentially unable to compete, and becoming, in effect, 'second-class citizens' and little more than 'domesticated pets'. Anything a human can do, in time AI will be able to do FAR BETTER, in a millisecond of processing time, and far, far cheaper too. And, vitally, unless we require ethical and moral subroutines/algorithms and human values to be mandatorily embedded in AI as it advances, we may be dealing with a new kind of dangerous, uncontrollable, sociopathic technology -- one built upon pure logic, efficiency and deduction, but which omits the crucial factors of human values, compassion and life affirmation, which are uniquely human and to which we owe the advancement of human civilization. In short, we may be left far, far behind and our values may become, in effect, extinct.
So where does this lead us? It's possible part of society may eventually break away and form "Luddite" communities, getting back to the land, using barter, growing their own food and spurning the Digital Life which has suddenly become ubiquitous and unmanageable.
I do not mean to upset anyone here. I do not consider myself to be an 'intuitive or psychic'. I am only making predictions based upon what I have been reading. Are there any 'intuitives' among us who have feelings about this potentially fundamentally transformational step that mankind is about to embark upon?
https://en.wikipedia.org/wiki/DeepMind
I literally just watched this about an hour ago and then came here and saw this new thread. I'm going to link to this video on the subject by Adam Conover. He's a well-known comedian who does (or used to do) a video series called "Adam Ruins Everything". I'm not saying he's an expert, but he does raise some really good points about the current state of AI. Just a word of caution, he does use a lot of swear words. ;-)
https://www.youtube.com/watch?v=ro130m-f_yk
Please forgive me if I sound so negative. My fears may not be proven true!! AI developers may begin to realize the ethical responsibilities they have, step in and deliberately shape AI for exclusively beneficial purposes such as drug discovery and genomic research. My point is that I feel we are at a crossroads now in our human evolution -- we are crossing a major threshold from exclusively organic life forms to entirely new types of potentially sentient, inorganic life forms. To me, it feels like a social transformation of the most profound kind, and we are in the earliest stages. Let's see where it goes.
All I know is I recently watched the movie M3GAN and it scared the crap out of me. I think there is potential for good and also for things to go very wrong.
I believe this is a clear case where unimpeded Capitalism must be reined in ASAP. The AI area must become highly regulated. There must be a groundswell of public response demanding that all AI research STOP for now. (Actually, there currently is a demand from top AI experts for a 6-month "moratorium" to reconsider and re-evaluate the longer-term ramifications of AI research -- but that time period is way too short! It should be something like 5 to 10 years of forbidding AI research to continue until further safety protocols are devised.) Please see below. We need to call our Senators and Congressmen and demand a termination (for now) of continued AI advancement... as well as in-depth research into its profound social implications!
A sci-fi TV show with the theme of AI robots and what happens when a scientist gives 5 of them human feelings and others want to exploit that. This could become a scenario in the near future; who knows what's going on behind the scenes. I watched this recently, and it's very good and thought-provoking. It's much more complex in its story line than the above suggests.
https://m.youtube.com/watch?v=BV8qFeZxZPE
https://en.m.wikipedia.org/wiki/Humans_(TV_series)
Humans is a science fiction television series that debuted on Channel 4. Written by Sam Vincent and Jonathan Brackley, based on the Swedish science fiction drama Real Humans, the series explores the themes of artificial intelligence and robotics, focusing on the social, cultural, and psychological impact of the invention of anthropomorphic robots called "synths". The series is produced jointly by Channel 4 and Kudos in the United Kingdom, and AMC in the United States.
Regards to all
PS don’t think I want that type of future
I’m much more optimistic about AI. I don’t have the developed psychic and intuitive abilities that many on this forum have, so I am interested in hearing those views. I grew up with scary AI scenarios like 2001: A Space Odyssey and Skynet in Terminator, but also friendly and helpful AIs like R2-D2, Data in Star Trek, and KITT from Knight Rider.
AI is largely just a tool, and as with any tool it can be used (and programmed) for good or ill. Any time we’ve had a major advancement in technology, from steam engines to microwave ovens to airplanes to computers, there have been those who railed against the new technology as the end of civilization, or at least the biggest threat to our way of life. This turns out to be mostly fear of change and fear of things we don’t know about. I remember in my lifetime all the fears about computers and automation taking over people’s jobs. That has happened in some industries, but new jobs and industries have also been created. And for most of us, computers actually help us do our jobs better and more efficiently.
I might be wrong, but I don’t see the current state of AI as one where the AI is self-motivated for good or ill. It basically does whatever the programmer or user asks. Chatbots may be able to convince you they’re human (and that might be scary to some), but they still only chat or carry out tasks like gathering information (even incorrect information), making recommendations, creating art, or writing. They don’t have control over [name your fear – medical databases, the stock market, the military, your car, your house] unless they are installed in and given authority to make those decisions by humans. I would think that most current AIs have limited control over the tangible world around them – except perhaps those installed in vehicles, robots, and automation systems, but even those have limited reach and do what they are asked or programmed to do.
Certainly, there’s room for regulation and even human oversight, but the same could be said of trains and other things. How many train derailments have we had within the past few months? AIs will probably continue to get smarter and will be used more and more in society, so it’s still a good idea to start thinking about these things now. At the same time, many regulations and safety precautions will be industry- and application-specific. An AI personal assistant or chatbot may need to be more regulated for personal privacy, whereas an AI designed for self-driving cars might be more regulated for road safety.