Imagine a world where machines can outsmart humans, where robots can do jobs that were once only done by people, where computers can understand and respond to human emotions. This is not a science fiction story, it's the world we live in today.
Artificial Intelligence is no longer a futuristic concept, it's a reality that's changing the way we live, work, and interact with each other. But as we stand on the brink of a new era, we are faced with a difficult question: should we embrace this new technology and all its benefits, or resist it and risk being left behind?
The answer is not simple, and it requires us to think deeply about the implications of AI on our society, our economy, and our humanity.
In this post, we will explore the promises and the perils of AI and ask ourselves, "Are we ready to adapt to the new standard or will we ignore it at our own peril?"
What if I told you the introduction above was generated by the ChatGPT language model from OpenAI? How does that make you feel? Are you impressed? Feeling duped?
Let’s take a look at another application of AI. Here, we see an AI playing a simple enough game: tic-tac-toe.
AI is remarkable in its ability to replicate tasks and operate within a pre-programmed set of rules. With neural networks and learning algorithms, AI can even take in operational data and refine its own algorithms.
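To ground that idea, here is a minimal sketch – not the demo referenced above, just an illustration – of an agent that plays tic-tac-toe strictly within an explicit, pre-programmed set of rules, using a classic minimax search rather than a learned model:

```python
# A rule-bound tic-tac-toe agent: every move comes from an explicit minimax
# search over the game tree, not from a learned model. The board is a list
# of nine cells holding "X", "O", or " ".

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) scored from X's perspective: +1 X win, -1 O win, 0 draw."""
    w = winner(board)
    if w:
        return (1 if w == "X" else -1), None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None  # draw
    best = None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, "O" if player == "X" else "X")
        board[m] = " "
        better = best is None or (score > best[0] if player == "X" else score < best[0])
        if better:
            best = (score, m)
    return best

# Usage: X has played a corner, O the center; ask the agent for X's best reply.
board = list("X   O    ")
score, move = minimax(board, "X")
print(f"Agent plays square {move} (expected outcome: {score})")
```

The point of the sketch is simply that the agent can only ever do what the rules allow it to evaluate – which is exactly what breaks down in the next example.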
However, what happens when we don’t provide AI with sufficient guidelines? As you see here, the AI "thinks outside the box" and completes the task in an unanticipated – and incorrect – manner. It is these unanticipated aspects of AI that warrant further consideration rather than blind adoption of the technology. Countless movies warn us of the malevolent possibilities of AI, from I, Robot to The Terminator to 2001: A Space Odyssey.
Despite such cautionary tales, AI is already prevalent in many of our existing technologies. For instance, a simple application is the spam filter in our email inboxes that saves us from the tedious task of sorting through unwanted emails – however imperfectly, as anyone who has ever received an email explaining how they’ve just won $10,000 from a foreign prince can attest.
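For a sense of the mechanics, here is a deliberately tiny, hand-rolled sketch of the kind of scoring a spam filter performs – a naive-Bayes-style word score with invented training examples, nothing like the far more sophisticated models real email providers run:

```python
import math
from collections import Counter

# Toy training data (invented for illustration only).
spam = ["you have won $10,000 claim your prize now",
        "wire transfer from a foreign prince awaiting you"]
ham  = ["meeting moved to 3pm see agenda attached",
        "can you review the grading plan before friday"]

def word_counts(messages):
    return Counter(word for msg in messages for word in msg.lower().split())

spam_counts, ham_counts = word_counts(spam), word_counts(ham)

def spam_score(message):
    """Sum of smoothed log-likelihood ratios: positive leans spam, negative leans ham."""
    score = 0.0
    for word in message.lower().split():
        p_spam = (spam_counts[word] + 1) / (sum(spam_counts.values()) + 1)
        p_ham = (ham_counts[word] + 1) / (sum(ham_counts.values()) + 1)
        score += math.log(p_spam / p_ham)
    return score

print(spam_score("congratulations you won a prize from a prince"))  # > 0: spam-like
print(spam_score("please review the attached agenda"))              # < 0: ham-like
```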
Another increasingly common application of AI is natural language processing. Think of the helpful – or otherwise – chatbots on sites like Amazon or any other online retailer. A more cutting-edge application of natural language processing is software that analyzes both voice and text data – think calls or emails – and pulls out key points to summarize for business use.
Finally, facial recognition has been a growing use of AI over the past few years. Anyone who has recently flown through a major airport has probably encountered the facial recognition screens at the security checkpoint.
This is just a small sample of already functioning AI applications across many industries. Actual uses, both current and future, are nearly limitless.
Let's address the current climate surrounding AI with a quote from William Gibson, one of the most influential science fiction authors of the last century and the man credited with coining the term "cyberspace." His writing has focused on the intersection of technology and society.
“The future is already here – it’s just not very evenly distributed.”
What a succinct way to summarize the current situation – and it brings us to the main idea of this post: embracing or resisting AI.
The automobile offers a historical example of why we should embrace new technologies. When the "horseless carriage" first came on the scene, it was met with skepticism and resistance from many, but the automobile ultimately brought significant change to society, the economy, and the way we all live our lives.
The same can be said for AI and its potential to change how we work, communicate, and access information. The fears people had about change, potential job losses, and general uncertainty when the automobile was introduced were valid, but they were alleviated by the increased mobility it brought about. Given enough time, we have come to see the automobile as essential to daily life.
On the cautionary side of the scale, consider a thought experiment. Performance-enhancing drugs are not currently allowed in any major sport – though that doesn’t mean they haven’t been used illegally by some from time to time – think MMA, baseball, cycling.
The question is: what would happen if performance-enhancing drugs were ever legalized in competition? Some would refrain from using PEDs for moral reasons or to protect the integrity of the sport, but if a significant percentage of competitors began using such drugs, it would only be a matter of time before you had to either join in or be left behind. Despite the headlines and scandals of doping in cycling over the years – and the continued illegality of the practice – an estimated 20 to 90 percent of professional cyclists still use PEDs. Clearly, the potential advantages are too great for many competitors to pass up.
Might AI present a similar conundrum in the everyday workplace?
This topic is increasingly relevant because AI is increasingly ubiquitous in our everyday lives. AI language processing and machine learning have now progressed to the point of automating legal research, writing essays indistinguishable from human-generated prose, powering the predictive text you see on your phone when typing a message, and translating complex texts.
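At the simplest end of that spectrum, predictive text can be sketched with nothing more than a bigram model – the toy corpus below is invented, and modern keyboards use large neural language models, but the basic prediction idea is the same:

```python
from collections import Counter, defaultdict

# Toy corpus (invented); real keyboards train on vastly more text.
corpus = ("the future is already here "
          "the future of work is changing "
          "here is the future").split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word, k=3):
    """Return the k most frequent next words seen after `word`."""
    return [w for w, _ in following[word].most_common(k)]

print(predict_next("the"))      # e.g. ['future']
print(predict_next("future"))   # e.g. ['is', 'of']
```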
Less than a decade ago, the conversation around AI and machine learning focused on the potential for displacement in industries such as truck driving, taxiing, and even aviation. Now, it seems, the AI industry has its sights firmly set on white collar positions. With the growing sophistication of services such as ChatGPT to research and generate original text, it isn't a leap to consider language processors generating copy, slogans, content, and other traditionally creative outputs.
AI image generators are also generating buzz as we consider the possibility of displaced artists and photographers. Even engineering isn’t immune: improving machine learning algorithms can mean that coders write the very code that displaces them from their own positions. In the civil engineering world, AutoCAD Civil 3D now has a grading optimization feature that uses automation to take a first pass at optimizing the cut and fill balance of a site.
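The grading optimization itself is proprietary to Civil 3D, but the quantity it tries to balance is easy to illustrate with a rough cut/fill estimate over a grid of cells – all elevations below are invented for illustration:

```python
# Rough cut/fill estimate: compare existing vs. proposed elevations (in feet)
# over a grid of equal-area cells. Numbers are invented; Civil 3D's grading
# optimization works on far richer surface models than this.
CELL_AREA = 25.0  # square feet per grid cell (5 ft x 5 ft)

existing = [[100.0, 101.2, 102.5],
            [ 99.5, 100.8, 101.9],
            [ 98.9, 100.1, 101.0]]
proposed = [[100.5, 101.0, 101.5],
            [100.0, 100.5, 101.0],
            [ 99.5, 100.0, 100.5]]

cut = fill = 0.0
for ex_row, pr_row in zip(existing, proposed):
    for ex, pr in zip(ex_row, pr_row):
        delta = pr - ex                  # positive: fill needed, negative: cut
        volume = abs(delta) * CELL_AREA  # cubic feet for this cell
        if delta > 0:
            fill += volume
        else:
            cut += volume

print(f"cut: {cut:.0f} cu ft, fill: {fill:.0f} cu ft, imbalance: {fill - cut:.0f} cu ft")
```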
Clearly, not all of these advances represent replacement-level technology. Many AI applications will serve as supplements to daily tasks, as in the grading optimization example, where an experienced engineer still completes the detailed grading of the site. Similarly, AI-generated content may simply function as a starting point that sparks discussion in the marketing department about which path to pursue.
But what if we start seeing ubiquitous AI content on the internet? What if an AI-generated image wins an art contest – as one did at the 2022 Colorado State Fair – over deserving human-made pieces? Should we be required to disclose this information or create a separate division? Should we require that AI-generated content carry some sort of identifier? If we can’t even tell that it was artificially generated, some will argue, then what’s the issue?
These are questions that we must confront in the coming decades. Clearly AI is here and – like doping in cycling – it is here to stay. We must collectively decide how to retain and protect our unique creative spaces and abilities as humans while embracing AI technologies where they legitimately make life simpler or easier. The key will be to make AI work for us and – unlike in so many science fiction films, to one degree or another – avoid becoming subservient to it.