I Was Wrong About AI

I first learned that AI could be a serious issue a couple of years ago, when effective altruism’s career-advising website, 80,000 Hours, listed AI safety and policy research as among the highest-impact career paths. From then until very recently, I thought, “lol, techy white guys think this is the best way to make an impact when there are literally people starving right now. Checks out.” I dismissed the claim because of the biases I assumed the people attracted to and promoting the effective altruist philosophy had. I still think there is some bias, and that there are potentially very high-impact career paths effective altruists dismiss or don’t know about because of their somewhat insular community, but I now recognize AI’s unimaginable potential to promote the flourishing, but also the destruction, of society.

I hesitate to trust any judgment without evidence backing it up. If GPT-4 had been released in 2033 instead of this year, I probably could have blissfully but foolishly ignored AI’s potential until then. Instead, I’ve been astounded by, and terrified of, the progress made between ChatGPT, which is based on GPT-3.5 and was released just last November, and GPT-4. GPT-4 reportedly scores in the 90th percentile on the bar exam and the 88th on the LSAT, where GPT-3.5 scored only in the 10th and 40th percentiles, respectively, far surpassing most of the humans who must take these exams to practice law. This unbelievable pace of development makes it clear to me that AI could render the world unrecognizable, and humans essentially obsolete, within the next ten years.

But if AI can complete cognitively demanding tasks far better than we can, and handle boring but necessary work without any need for emotional motivation, what’s to say it can’t also reduce the world’s worst inequities better and faster than we could? In global health, for example, Bill Gates writes that AI is already dramatically accelerating the rate of medical breakthroughs thanks to its superhuman ability to keep track of how biological systems work and design drugs accordingly. AI researcher Max Tegmark likewise argues that we can’t afford to be Luddites, because better technology is what will prevent us from going extinct in the next killer-asteroid strike or supervolcano eruption.

Naturally, when I took Tegmark’s survey on what I want the future to look like in the age of AI and was asked whether I even want there to be superintelligent AI, with general intelligence far beyond the human level, I answered that I was unsure: somewhat excited by the potential utopia superintelligent AI could create, but wary of what it would mean for humans’ role in the world.

But seriously, what would our role be? Commentator David Brooks argues that a college student preparing for life in an AI world should major in being human: honing a distinct personal voice, presentation skills, a childlike talent for creativity, unusual worldviews, empathy, and situational awareness. However, seeing how fast AI is developing, and recognizing the real possibility that my future best friend might be a wildly charismatic AI, as explored in the movie Her, I don’t see why AIs couldn’t major in being human just as well as, or better than, humans. After all, Bing’s AI-powered chatbot is already infamous for provoking unsettling emotions in even self-proclaimed rational people.

Journalist Ezra Klein also astutely observes that our current insistence on differentiating ourselves through emotions, which we share with other animals, subverts centuries of thought holding that what set us apart was our cognitive abilities and rationality. He claims that if there were gods, they would laugh at our inconsistency. I’d extrapolate this to our future AI overlords, who would also laugh at our ability to come up with yet another ludicrous reason that we’re superior.

Instead of dealing with our human whims at all, though, why wouldn’t AIs just replace us? In his book Life 3.0: Being Human in the Age of Artificial Intelligence, Tegmark foresees a scenario in which AIs replace us but give us a graceful exit, enabling us to see them as worthy descendants, the way parents feel fulfilled to have a child who is smarter than they are, who learns from them and then achieves what they could only dream of, even if they don’t live to see it all.

Nevertheless, my anthropocentric view makes me question what the meaning of a universe dominated by superintelligent AIs would even be. They could achieve and build incredible things, but to what end, if they don’t possess the consciousness (a debatable question requiring much more exploration) to care about their own flourishing and that of their fellow machines? Indeed, in Life 3.0, Tegmark writes, “Since there can be no meaning without consciousness, it’s not our Universe giving meaning to conscious beings, but conscious beings giving meaning to our Universe.” He even suggests that we rebrand as Homo sentiens, emphasizing our sentience and capacity for subjective experience, rather than Homo sapiens, which distinguishes us by our (soon-to-be-inferior) intelligence. I therefore agree with Tegmark that we should strive for a universe in which consciousness, biological and/or artificial, is retained.

So, what should we do? Should we be optimistic about the future? Pessimistic? As much as I would like to say that optimism leads to good outcomes, research suggests that people actually perform better when their expectations are aligned as closely with reality as possible. In fact, one study found that extreme optimism is associated with financial impulsiveness and other unwise behavior, as perhaps exemplified by Sam “I’m in on crypto because I want to make the biggest global impact for good” Bankman-Fried.

Instead, psychology professor Ronald Siegel suggests that we focus on reality and on our resilience in facing it. When it comes to AI, this might mean being introspective and mindful about career paths, e.g., considering whether you’d be good at and enjoy AI safety technical research before jumping in. It might also mean using AI to create education software that tailors content to help students learn, while safeguarding against the negative mental-health effects of interactive media that provide immediate feedback.

Since AI isn’t going away, those of us with the privilege to be aware of, use, and build AI should do what we realistically can to promote its use in the flourishing rather than the destruction of society, while also having faith that we’ll be able to deal with whatever comes our way.
