
WE WILL FIX IT LATER: AI AND APATHY
ChatGPT was quietly released to the public on Nov. 30, 2022, to a population mostly unaware the product even existed. That blissful ignorance would not last long - less than a week, in fact.
Within five days, ChatGPT reached 1 million registered users, making it the fastest-growing app in history at the time. For context, it took Facebook about 10 months to reach 1 million users, and Instagram, launched in 2010, got there in roughly 2.5 months.
This new chatbot is built on a large language model - a generative pre-trained transformer - that was fine-tuned to apply what it learned from its training data to problems well outside its initial scope.
OpenAI, the company behind the product, spent years refining the underlying model before turning it into a chatbot, a form of artificial intelligence few people had much direct experience with, even though AI-powered virtual assistants such as Siri have been in widespread use since their release.
Within a few months, this previously unknown product had left employees in nearly every industry deeply uneasy about how it was going to change their lives.
The first to raise the alarm was academia, with good reason. The model's seemingly perfect recreation of the nuance of human language and knowledge was overwhelming, confronting many professors with the reality that years of anti-plagiarism and anti-cheating tools had been rendered useless overnight.
Because the text the predictive model produces is generated anew rather than copied from a source, the AI had become what linguist Noam Chomsky describes as "high-tech plagiarism."
I think that is an apt description of its relationship to academia and work. ChatGPT was trained on what is already on the internet - the effort and work of actual people. Nothing it produces is a unique or original idea; it is nothing more than a prediction shaped by the design it was given.
This fear soon spread to other professions - copywriters, paralegals, programmers and even the medical and legal fields, once the AI had passed multiple medical and legal licensing exams.
After this initial period of concern, the conversation shifted to the productivity gains this new technology could unlock.
The coverage of AI over those two months was a roller coaster, but one question seemed to be missing: does this roller coaster have brakes? Or seat belts?
In typical fashion, when introduced to a revolutionary technology we focus on the positive ways to adopt it, while insights into its potential dangers are fleeting. Facebook is a good example of how our fascination with novelty leads us to don horse blinders, shielding society from the actual ugliness of what is under the hood.
Its content moderation teams, crammed into large office spaces for eight hours a day, 40 hours a week, were confronted with some of the most grotesque material humanity produces.
Internal documents showed that high-level executives knew the platform was being used to fuel the genocide against the Muslim Rohingya in Myanmar, yet did not cut off service because it could hurt the bottom line. And of course there is the issue we deal with on every platform: misinformation and disinformation.
Disinformation is likely where generative AI will do the most damage the fastest. With the ability to mass-produce false news articles, pair them with deepfakes at scale and give each one a unique voice and style, we will find ourselves in a constant cat-and-mouse game of deciphering what is real and what is not.
Societies around the world have already been deeply shaken by far less sophisticated bot networks - and by human troll farms that cannot produce at anywhere near the scale generative AI can. We remain behind on this problem, and lawmakers seem to have no desire to impose any regulatory oversight on these companies.
The potential loss of human agency is another mounting problem that will demand urgency in the near future - at a time when we still imagine the climate crisis as something belonging to a distant, futuristic period, despite the devastation it is already causing across large parts of the global south.
AI is also being developed across the private sector, with billions of dollars flowing into research at companies that hold wildly different ethical standards, and whose programs carry many different types and layers of bias.
As recently as 2021, Facebook once again had to explain how and why its AI did something as despicable and grotesque as labeling a video of several Black men as "primates."
The same thing happened to Google in 2015, and three years later its only "fix" was to block the offending label entirely. Amazon's facial recognition software, meanwhile, falsely matched 28 members of Congress - most of them people of color - to mugshots of people who had been arrested.
These problems have been known, and regulation has been called for, for nearly a decade, with hardly any movement at all - to the point where four countries continue to block an international treaty restricting offensive AI weaponry in war.
Malware and ransomware have seen a huge spike in the last five years, going so far as to disrupt the fuel supply of the entire East Coast of the United States.
In the first year of the pandemic, hundreds of hospitals had their systems locked by ransomware attacks, costing the health care industry an estimated $20 billion.
This kind of malware will become accessible to far less sophisticated programmers, given the coding ability ChatGPT has already displayed.
Less than one month after its release, there were already three reported incidents of malware written with the program being sold on the dark web. Identity theft, romance scams, large-scale fraud and libel will all become far more frequent in our society if we continue to let companies care only about shareholders, constant growth and profit.
A society whose predominant philosophy is economic growth above all else has created a widening gap between the cuts being inflicted and the bandages we have on hand.
It is worth asking what benefits something like ChatGPT offers outside the realm of economic productivity - how it might improve our lives in a non-material way that does not result in the outsourcing and erosion of uniquely human skills.
I have been thinking about this for over a month now and struggle to come up with a single benefit impactful enough that I could confidently endorse it, knowing all the potential dangers we would be accepting as the cost.
This should not be interpreted as a call to abandon these new technologies and bury our heads in the sand when it comes to progress around the world.
It is merely a request that before private companies unleash society-altering technologies on the world, there be boundaries and supervision in place to determine what uses are acceptable.
That would add an element of consent for the roughly 8 billion people whose lives this will alter in some manner - rather than letting a small group of like-minded people with very little diversity of perspective make it widely available without any assurances of responsible use.
BY MICHAEL HESSELEBEIN