
Are you ready to be replaced by ChatGPT and other AI programs?

This picture taken on Jan. 23, 2023, in Toulouse, southwestern France, shows screens displaying the logos of OpenAI and ChatGPT, a conversational artificial intelligence software application developed by OpenAI. (Photo by Lionel Bonaventure/AFP via Getty Images)

The rise of “woke culture” in liberal circles has become a topic of much debate and controversy in recent years.

One of the most troubling aspects of woke culture is the way in which it promotes a culture of censorship and intolerance. In many cases, individuals and groups who hold different views are not just disagreed with, but actively demonized and ostracized. This kind of divisive rhetoric can only serve to further polarize society and make it more difficult for people to find common ground.

Another issue with woke culture is the way in which it often relies on a form of moral posturing that is both self-congratulatory and intellectually lazy. Many of the people who claim to be woke simply repeat the same slogans and buzzwords without actually taking the time to engage with the complexities of the issues at hand. This kind of shallow thinking is a poor substitute for real critical thinking and analysis.

Perhaps the most concerning thing about woke culture, however, is the way in which it can lead to a kind of groupthink that stifles intellectual diversity and creative thinking. When everyone is expected to conform to a certain set of ideological principles, it becomes very difficult for new ideas and perspectives to gain a foothold. This can be incredibly detrimental to the long-term health of our society.


While I agree with what you’ve just read, I didn’t write it. Not any of it. Not a single word of it. An artificial intelligence (AI) program — ChatGPT — wrote those paragraphs. All I did was tell it to write an essay on “the crazy, liberal woke culture … in the style of Bernard Goldberg.” Do I think it came out the way I would have written it? No. I think the AI version is stiff, even lifeless. But just between us, for a computer program, it ain’t bad.
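For readers curious about the mechanics, a prompt like the one above can also be sent to a model programmatically rather than through a chat window. The sketch below shows, in rough outline, how such a request is typically packaged; the model name and message format mirror OpenAI’s chat-style API, but treat the specifics as illustrative assumptions, not a definitive recipe.

```python
# Illustrative sketch only: packaging a column-style prompt as a
# chat-completion request payload. The model name ("gpt-3.5-turbo")
# and the {"role", "content"} message shape are assumptions modeled
# on OpenAI's chat API; check current documentation before relying on them.

def build_chat_request(prompt: str, model: str = "gpt-3.5-turbo") -> dict:
    """Wrap a user prompt in a minimal chat-completion request payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# The prompt from the column, passed as the user message:
request = build_chat_request(
    "Write an essay on the crazy, liberal woke culture ... "
    "in the style of Bernard Goldberg."
)
print(request["model"])  # which model the payload targets
```

In practice this payload would be posted to the provider’s API endpoint with an authentication key; the point here is simply that “telling it to write an essay” is, under the hood, one short structured message.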

And it’s why I wonder: Will AI make us lazy? Will we let artificial intelligence think for us? Why spend a lot of time writing a term paper, a column, a speech, or even a book when you can get an AI program to do it for you?

Still, I’m a big fan of what AI technology can do, and has done, for all of us — how it’s made us more productive. Thanks to AI, we have speech and image recognition, search engines, translation services, safety functions in cars, and a lot more. Once we had to run down to the public library to look things up. Now it’s right there on our computers. Thank you, Google.

Growing up, my father, a blue-collar worker, would talk, with some trepidation, about “automation” — the word that was used before “artificial intelligence” caught on — and how this newfangled concept someday might cost a lot of workers (like him) their jobs. Well, “someday” is here. AI robots are replacing workers in more than a few industries — they’re more efficient and cost a lot less. They don’t take sick days or lunch breaks, and they don’t need health insurance or pay raises.  

According to one report, “48 percent of experts believed AI will replace a large number of blue- and even white-collar jobs, creating greater income inequality, increased unemployment, and a breakdown of the social order.”

Way back in 1968, in the movie “2001: A Space Odyssey,” we got a glimpse of what could happen when an AI program develops a mind of its own and decides that it will no longer take orders from humans. In the movie, “HAL,” the AI program with a frighteningly calm voice, decides to kill two astronauts who want to pull the plug on him (or it). Could something like that actually happen? Could an AI program outsmart the smart humans who created it? Should we worry about that — or was it simply a spooky sci-fi movie?

But HAL did leave me a little uneasy. So does the idea that, thanks to an AI program called Historical Figures Chat, you can “talk” to all sorts of notable figures from history, including Adolf Hitler and his top lieutenants.

According to one news report, “The app’s version of Heinrich Himmler, the chief of Nazi Germany’s SS and an architect of the Holocaust, denied that he was responsible despite his well-documented role.”

Less cringeworthy, but still of concern (to me, anyway) is the fact that you can instruct a sophisticated AI computer program to write a song that sounds like Bob Dylan wrote it, and the program can do it. And, if you ask nicely, it can write a sonnet in the style of William Shakespeare. Will the product be as good as that from Dylan or Shakespeare? 

Maybe not yet. Maybe, let’s hope, not ever. But AI technology is getting better every day. That’s why I find our brave new world of artificial intelligence more than a little scary. We know what it has done. What we don’t know is what, someday, a brilliant but bloodless AI program might do — to all of us. I wonder what HAL would think about that.

Bernard Goldberg is an Emmy and an Alfred I. duPont-Columbia University award-winning writer and journalist. He was a correspondent with HBO’s “Real Sports with Bryant Gumbel” for 22 years and previously worked as a reporter for CBS News and as an analyst for Fox News. He is the author of five books and publishes exclusive weekly columns, audio commentaries and Q&As on his Substack page. Follow him on Twitter @BernardGoldberg.


Copyright 2023 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
