Should we be afraid of AI?
For the past few weeks, because of the Hatch Project of the Rotary Club of Makati and the digitally inspired project submissions of so many gifted young Filipinos, I have come to appreciate vivid examples of how AI can do wonders and potentially change people’s lives. On the other hand, for every voice shouting hallelujahs over the immense value that AI can contribute to society, there is another loudly ringing the alarm bells about its dangers.
The most notable critic, ironically, is a 75-year-old British-Canadian, Geoffrey Hinton, an acclaimed computer scientist, a recipient of the Turing Award (the Nobel Prize of computing), and generally referred to as the Godfather of AI. YouTube and Google are replete with material on Hinton’s work on neural networks, large language models, and machine learning, which essentially enable a computer to mimic how a human brain works. However, with recent advances in backpropagation (an algorithm that measures errors at the output and works them backward to the input nodes, improving the accuracy of predictions in data mining and machine learning), the digital brain has proven to function more effectively and more swiftly than the human brain. Simply put, without the technical jargon, it is a computer program that can out-think and outsmart humans. Hinton’s recent retirement from Google, prompted by his fear of the consequences of further unimpeded advances in digital intelligence, has paved the way for numerous highly publicized interviews on what he firmly believes are the existential risks that AI poses to humanity.
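For readers curious what “working back from output to input” means in practice, here is a toy sketch, not Hinton’s actual work and far simpler than any real neural network: a two-weight chain learns to double a number by measuring its error at the output and passing it backward, via the chain rule, to adjust each weight. The network shape, learning rate, and target function are all illustrative assumptions.

```python
# Toy backpropagation: a chain of two weights learns y = 2x.
# The error is computed at the output node and propagated
# backward to update the earlier (input-side) weight too.
import random

random.seed(0)
w1, w2 = random.random(), random.random()  # two chained weights

def forward(x):
    h = w1 * x           # hidden node value
    return h, w2 * h     # network output

lr = 0.05  # learning rate (illustrative)
for _ in range(2000):
    x = random.uniform(-1, 1)
    target = 2 * x
    h, y = forward(x)
    err = y - target             # error measured at the output...
    grad_w2 = err * h            # ...then worked backward:
    grad_w1 = err * w2 * x       # chain rule reaches the first weight
    w2 -= lr * grad_w2
    w1 -= lr * grad_w1

_, y = forward(1.0)              # after training, y is close to 2.0
```

Scaled up to billions of weights, this same backward pass is what lets today’s large models sharpen their predictions with every example they see.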
Bottom line, Hinton is certainly no madcap scientist, and his very pointed comments need to be taken seriously. If his warnings are not enough to convince you, only last March well over 27,000 technology leaders and researchers, including Elon Musk, CEO of Tesla and owner of Twitter, and another Turing awardee, Dr. Yoshua Bengio, a pioneer of the deep learning techniques behind systems like GPT-4, signed an open letter warning that current AI technology presents “profound risks to society and humanity” and asking for a six-month halt to advanced AI research to allow a thorough study of how to manage and mitigate the risks, if that is still possible.
What are these risks? The most immediate and realistic risk confronting all of us is disinformation. Since the program uses natural language, generates realistic images, and can converse as persuasively as a human, it will be almost impossible, without rigorous fact- and source-checking, to tell whether what we read and view on television, in online messages, on social media, and in print is accurate, or whether it is being conveyed by a human or a computer program. The GPT program is not foolproof; its own developers acknowledge that it sometimes “hallucinates,” generating inaccurate or biased output. Think of what could happen in an election marred by AI made to hallucinate deliberately. Or a despondent individual seeking medical or emotional support unknowingly being led down an erroneous path.
Another very real risk is job loss. All businesses naturally seek to cut operating costs, so the first to go will likely be the rote functions that AI can easily take over. But rapid advances in digital intelligence could soon claim more demanding work as well, such as that of paralegals, personal assistants, writers, and translators. Think of the massive unemployment and social disorder this would trigger.
But the biggest fear that haunts our digital scientists is the eventual loss of control over AI.
As more governments, politicians, businesses, AI researchers, non-profit organizations, and ordinary people plug uncontrollably into these large language models, all this data in turn feeds an ever larger library of information and patterns for the AI systems to analyze and absorb, giving these systems considerable computing power, and eventually even allowing them to write their own code. When that happens, control over AI could be forever lost, which, according to Hinton, could signal the end of humanity.
So, should we be afraid of AI? I think the answer is obvious. But like any tool that Man invents, AI can and should be used for good; we just need to be wary that it can equally be used for bad. In the end, it is Man who will determine the outcome.
Until next week… OBF!
For comments, email [email protected]