There is a lot of hype these days about AI and how it will (or won’t) destroy human civilization. Personally, I foresee none of the extremes hyped in mainstream media manifesting, although I do expect that AI will lead to great innovations and to egregious crimes.
Why I Am NOT Afraid of AI
First, what the media calls “AI” isn’t actually intelligent. The generative AI systems like ChatGPT, DALL-E, and Midjourney are all very good at taking what they have been fed and regurgitating mash-ups of the original content when given prompts by humans. Some of the work created is truly awe-inspiring, but impressive as it may be, it is not going to replace the need for real artists any more than synthesizers replaced the need for real musicians. And as much as Hollywood and assorted publishers may think they can replace human writers with generative AIs, no computer on the planet is capable of writing something like any of the Greek Tragedies, Hamlet, One Hundred Years of Solitude, The Sun Also Rises, or The Left Hand of Darkness.
To further illustrate what I mean, take a look at the video embedded below. This amazing artist takes modern music and plays it on traditional Chinese instruments. The arrangements are subtle and nuanced in ways that generative AI can’t match. Similarly, anyone who has been to or participated in a musical jam session knows the vitality of the sound that flows when musicians use their instruments in a communal creative expression.
One positive way that generative AIs can be used, however, is to help humans parse large amounts of data, get past writer's block, or explore a color or thematic style they would not otherwise have considered. The more I personally get to know the capabilities of LLMs, the more I find myself using them to do quick recon missions to snag information that would take me much longer to find and would probably send me down rabbit holes I really don't have time for. But I'm using these LLMs as a tool to help me with my writing. I am not giving up my personal agency as a writer. Instead, I'm using them as a way to manifest what I want to say in a better, faster way.
Similarly, when I'm researching something, I find that I'm turning to ChatGPT more often as a way to get to the core information I'm looking for. Don't get me wrong, I fact-check and verify what ChatGPT gives me ALL THE TIME. Usually, the hits are good. The misses, however, can be spectacular, but the same is true of Google searches. The point is that someone with the ability to think critically can use LLMs to get to relevant information faster and more effectively than through other, more traditional means.
With all that said, however, there are things about LLMs and other big data systems that frighten the bejesus out of me.
What DOES Frighten Me About AI
The biggest problem I see with generative AI is that it takes the challenges we already have regarding big data and kicks them up another order of magnitude or two… or three. We've already been having trouble with how big tech companies like Google, Facebook/Meta, Microsoft, etc., manage and mismanage data about their users, clients, subscribers, whatever. Big Tech has already proven that they don't respect the individuals whose data they routinely hoover up and analyze so they can turn around and manipulate those same individuals into doing things against their own self-interest. And don't even get me started on data brokers like Experian, Equifax, TransUnion, Acxiom, CoreLogic, and so on.
Now let's add the ability to use an even easier interface like ChatGPT to rend, tear, spindle, and mutilate all that data, parsing and analyzing it for user profiles, trends, or other analytics. Yeah, it's a nightmare in the making.
These are just a few ways that generative AI can be abused:¹
- Data Amplification: Generative AI models require vast amounts of data for training, leading to increased data collection and storage. This amplifies the risk of unauthorized access or data breaches, further compromising personal information.
- Data Inference: LLMs can deduce sensitive information even when not explicitly provided. They may inadvertently disclose private details by generating contextually relevant content, infringing on individuals’ privacy.
- Deepfakes and Misinformation: Generative AI can generate convincing deepfake content, such as videos or audio recordings, which can be used maliciously to manipulate public perception or deceive individuals. (Elections, anyone?)
- Bias and Discrimination: LLMs may inherit biases present in their training data, perpetuating discrimination and privacy violations when generating content that reflects societal biases.
- Surveillance and Profiling: The utilization of LLMs for surveillance purposes, combined with big data analytics, can lead to extensive profiling of individuals, impacting their privacy and civil liberties.
Again, these issues are not new. I remember thinking years and years ago about how dangerous it could be if a radical group got their hands on the purchasing history data for Amazon. It would be a very short step from there to profiling people to identify “undesirables” who could then be persecuted or killed.
Similarly, false advertising and misinformation are as old as humanity itself. The only difference is that now we can generate them more easily than ever before, and AT SCALE. This ability could easily be leveraged by unscrupulous people to control the masses by controlling the information they receive. The possibilities make Big Brother look like a Boy Scout.
The upshot here is that, like any technology, it isn’t the technology itself that poses the risk. It is the potential for HUMANS to use that technology against each other in a malicious manner. Again, not something that is a new issue. In the hands of a skilled surgeon, a scalpel saves lives. In the hands of a serial killer, that same scalpel ends them. The scalpel itself is neither good nor evil. It is the use to which it is put that matters.
The Bottom Line
The AI genie is definitely out of the bottle, and, like the invention of the horseless carriage, this new set of innovations will likely prove to be highly disruptive. Before long, specialized GPTs or their kin will handle low-level tech support, help you make restaurant and travel reservations, or manage your stock portfolio. That will displace phone bank workers around the globe, and likely within a surprisingly short time.
And as AI becomes more adept at discerning things that matter to us humans, this more sophisticated software will be paired with hardware to do things like prepare fast food, deliver packages and pizzas, vacuum carpets in offices and hotels, and, sadly, engage in warfare. On the plus side, this has the potential to allow more people to pursue things like higher education, space exploration, and scientific research. But the risk remains that the same technology that could set humans free from drudgery and toil could also be leveraged to enslave their minds.
I don't know which way things will evolve or devolve. I can only try to help people understand the potential for good and for harm, and to work with others to prevent the worst-case scenarios from happening as best I can.
¹ Full disclosure: I used ChatGPT to help me compile this list.