Generative AI: The problem of ‘mere information’ versus values

In June, we’re told, ChatGPT’s website drew 1.6 billion visits from at least 100 million users. Generative artificial intelligence is suddenly mainstream, creating written content, images, audio, and video.

Why are so many people using generative AI? Because creating content is hard work, and having AI do it is a tremendously helpful shortcut.

But it raises ethical questions, and the biggest is accountability.

While humans provide the prompts to AI, the results come not from any identifiable human source but from how the algorithms synthesize pre-loaded information.

That information consists of billions of fragments of stored “intelligence,” in quantities far beyond what any individual could absorb.

AI developer Anthropic said this year its ChatGPT-like Claude AI language model can analyze an entire book’s worth of material in under a minute. Not just read the words, which would take a human at least five hours, but digest the words at a deep level.

For example, Anthropic uploaded the text of The Great Gatsby and modified one line to say the narrator, Nick Carraway, was a software engineer working on a machine learning tool at Anthropic.

“When we asked the model to spot what was different [from the original text], it responded with the correct answer in 22 seconds,” Anthropic said. The program can also interactively respond to questions about the text.

But here’s the problem. Generative AI responds to nearly all prompts equally — without a value judgment or analysis of context. The humans providing the prompts can get whatever answers they want.

For example, I asked ChatGPT why hiring family members is bad. It cited conflicts of interest, potential incompetence, damage to workplace morale, lack of diversity, complicated communication and feedback due to familial dynamics, legal and ethical concerns, and problems when terminating family members.

Then I asked ChatGPT why hiring family members is good. This time it cited enhanced trust and loyalty, shared values and vision, long-term perspectives, lower turnover, informal communication, cultural continuity, efficient onboarding, and diverse skill sets.

I did the same experiment with swearing — asking why swearing is bad and then why it’s good — and got similar results.

It’s bad because it can be offensive, disruptive, violative of social norms, unprofessional, aggressive, harmful to children, and maybe illegal.

But swearing is good, ChatGPT said, because it promotes emotional release, social bonding, expressiveness, humor and creativity, authenticity, and stress reduction.

In both cases — hiring family members and swearing — ChatGPT warned me that context matters. But the algorithm doesn’t contextualize. Instead, it responds to prompts using information it has digested.
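Readers who want to try the experiment themselves can do so programmatically. Here is a minimal sketch using the OpenAI Python SDK; the model name and the exact prompt wording are illustrative assumptions, not a record of the queries described above.

    # Reproduce the two-sided prompt experiment: ask the model why a
    # practice is bad, then why the same practice is good.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    def ask(prompt: str) -> str:
        # Send one prompt and return the model's text reply.
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model; any chat model works
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    for topic in ("hiring family members", "swearing"):
        print(ask(f"Why is {topic} bad?"))
        print(ask(f"Why is {topic} good?"))

Run this and the same model argues either side on demand, which is the point: it supplies reasons without judging which framing is warranted.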

Another problem is how generative AI handles that information. When it responds to prompts, it generally doesn’t cite sources or tell us how they’re weighted. Were the reasons for and against swearing from the Bible or Mad Magazine?

We all understand that a tool to instantly produce high-level, well-expressed content is a tremendous time saver. When used in the right situations — where the goal is to find and impart information — there’s nothing wrong with it (if we accept that the information it has digested is reliable).

But our lives are guided by more than information. Values are what finally direct our decisions and behavior. As a small example, people choose to read the New York Times or the Wall Street Journal because of the different values guiding each publication. We trust them as sources of information because we trust their values.

Removing the human element from content creation deprives the consumer of something essential — the ability to know the creator’s values. When ChatGPT responds to my prompts, I don’t know who’s doing the talking, for what purpose, or by what authority. I’m left with information unmoored from values.

Values determine how we navigate the interests of others. They transcend mere information and require a special kind of wisdom, a dimension beyond the scope of generative AI’s capabilities.
