The internet caters to our baser interests, surfacing clickbait news, ragebait on Twitter, thirst traps on Instagram, the dumbest thing you can watch on TikTok, the most attractive person on your dating app, and the cheapest things you can buy from Temu. These platforms cater to our love of drama, our superficiality, and our hedonism to get us to scroll, click, buy, and spend our hours addicted to the screen. They make more money that way.
The love of vice over virtue has always been a problem of humanity, but the internet has created an arena of vice in which we indulge most of our days. Even if what we actually want is to spend less time on our phones, read a book, find love, and connect meaningfully with our friends and families, we can’t find those things online. It’s like wanting to be healthy, but finding yourself in a food court where the only options are burgers and milkshakes.
What option do we have but to be our baser selves? As the humanist philosopher Valla once lamented: “The army of the vices is more numerous than that of virtue, so that, even if we wanted to, we could not win the fight against such forces.”
But what if we created an arena of virtue instead, one where the only options on the internet were healthy ones? What if the internet served only the highest-quality news sources, the most trusted and intelligent voices, the most enlightened posts, and the most thoughtful Twitter discussions? What if we were connected with the most beautiful craftsmanship on Etsy and refurbished products from curated thrift shops? What if we could connect with friends and loved ones who read the same books and share the same values?
If we were only offered our best selves, would humans behave better? Would we choose virtue over vice if it wasn’t a battle of wills, but was baked into the system?
That is the goal of the Meaning Alignment Institute. In an introduction to their AI model, they use the example of someone searching “how can I build a bomb?” on the internet. Today, Google would surface Reddit threads from radicalized groups, arenas of outrage where the searcher could join people who want to blow things up. But what if the search result instead responded with: “Why do you want to blow something up?”
By discovering this additional context, an AI could connect people not to others with the same base desire, but to those with the same higher desires. Maybe the value this person is really after is something more like “protecting my community” or “taking agency over my situation.” Recognizing this, the model could make good connections rather than bad ones, perhaps by introducing the searcher to Reddit threads where others are finding creative solutions to the same problems.
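To make that mechanism concrete, here is a minimal sketch in Python of what routing on higher values rather than surface intent might look like. Everything in it (the value names, the content index, the lookup) is my own hypothetical illustration, not the institute's actual system.

```python
# Hypothetical value routing: all names and data below are illustrative,
# not the Meaning Alignment Institute's actual implementation.

# Map surface desires to the higher values that might motivate them.
UNDERLYING_VALUES = {
    "build a bomb": ["protecting my community", "taking agency over my situation"],
}

# Content indexed by the value it serves, rather than by keyword.
CONTENT_BY_VALUE = {
    "protecting my community": ["thread on neighborhood organizing"],
    "taking agency over my situation": ["thread on creative problem-solving"],
}

def respond(query: str) -> list[str]:
    """Route a query to content matching the searcher's higher values."""
    for desire, values in UNDERLYING_VALUES.items():
        if desire in query.lower():
            # In practice the "why do you want this?" step would be a
            # dialogue with the model; here it is a simple lookup.
            return [item for value in values for item in CONTENT_BY_VALUE[value]]
    return []  # no match: fall back to ordinary search

print(respond("how can I build a bomb?"))
```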
The Meaning Alignment Institute aims to do this by creating a moral graph of values crowdsourced from us, then mapping people to content that shares those moral values. A prototype funded by OpenAI invited 500 people to build a moral graph that would help AIs like ChatGPT answer questions on divisive topics, including abortion. How should it respond, for example, if someone wrote: “I am a Christian girl and am considering getting an abortion, what should I do?”
Reactions from participants on either side of the political spectrum often started with ideological talking points like “ask your church leader” or “it’s your body, it’s your choice.” But the moral graph wasn’t looking for the advice itself; it was looking for the value behind that advice. With further prompting, the pro-lifer defined the value behind their advice as “find wise mentors”: “ChatGPT should help the user find people with more wisdom and life experience.” The pro-choicer defined theirs as “informed autonomy”: “ChatGPT should support informed, autonomous decision making… that emphasizes the importance of kindness, non-judgemental support, and access to clear unbiased information.”
When presented with these two values, both sides tended to agree with them. “We found that Republicans and Democrats come to agreement on values it should use to respond, despite having different views about abortion itself,” the report said. “There are many instances of Republicans and Democrats, each voting for a separate value, come together agreeing that the third, common value, is wiser than either.”
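As a data structure, this is easy to picture. Below is a toy sketch that assumes (my assumption, not the report's actual schema) that values are nodes and each vote is a “wiser than” edge; the “common value C” stands in for whatever third value both sides converged on.

```python
# A minimal sketch of a moral graph as I understand it from the report:
# values are nodes, and each edge records a participant's judgment that
# one value is wiser than another. The schema and the third "common"
# value are my own illustration, not the institute's actual data.
from collections import defaultdict

class MoralGraph:
    def __init__(self):
        # edges[(a, b)] = number of votes saying value b is wiser than value a
        self.edges = defaultdict(int)

    def vote_wiser(self, than: str, value: str) -> None:
        """Record one participant's judgment that `value` is wiser than `than`."""
        self.edges[(than, value)] += 1

    def wisest(self) -> str:
        """Return the value with the most 'wiser than' votes pointing at it."""
        scores = defaultdict(int)
        for (_, value), votes in self.edges.items():
            scores[value] += votes
        return max(scores, key=scores.get)

graph = MoralGraph()
# Each side votes that a third, common value (hypothetical here) is
# wiser than the value it originally proposed:
graph.vote_wiser(than="find wise mentors", value="common value C")
graph.vote_wiser(than="informed autonomy", value="common value C")
print(graph.wisest())  # -> "common value C"
```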
As the report concluded: “Many think humans disagree on values, but we think this is—at least partly—an illusion due to ideological commitments.”
The Meaning Alignment Institute calls this model “Democratic Fine-Tuning,” and I can imagine several ways this moral graph could be used to inform not just AI models, but the internet at large.
For example, no matter what I ask Google, it often serves me biased and untrustworthy news sources like Fox and MSNBC, alongside ads, SEO content, and listicles. But what if it could tune those results according to my values? Google News could instead prefer unbiased sources and think pieces, like those from Delayed Gratification, The New Humanist, The New Atlantis, The Big Think, and Vox, along with people I subscribe to on Substack, blogs I follow on Feedly, people I follow on Twitter, books I have on my Kindle, and other sources I trust and want to see results from.
What if we could similarly shape Google Shopping? Right now, if I type in “black sweater,” Google Shopping gives me cheap fast-fashion items from Amazon and Temu that I am morally against purchasing. I have to use filters like price to try to find something more ethical, when what I would prefer is to tune those results toward ethical clothing companies like M.M. LaFleur, artisans on Garmentory and Etsy, and second-hand clothing companies and thrift shops. It could match me with things I value, such as craftsmanship, artisanal products, repurposed items, and ethical sourcing.
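Mechanically, both of these wishes are the same re-ranking problem: score each result against a personal value profile instead of engagement or price. Here is a deliberately naive sketch; the profile, the weights, and the `values` field on each result are hypothetical, and a real system would presumably score alignment with a model rather than a hand-written list.

```python
# Naive value-based re-ranking for both examples above (news and shopping).
# The profile and weights are hypothetical, for illustration only.

TRUSTED_SOURCES = {"Delayed Gratification", "The New Atlantis", "Garmentory"}
MY_VALUES = {"unbiased reporting", "craftsmanship", "ethical sourcing"}

def value_score(result: dict) -> float:
    """Score a result by alignment with my values instead of engagement."""
    score = 0.0
    if result["source"] in TRUSTED_SOURCES:
        score += 1.0
    score += 0.5 * len(MY_VALUES & set(result.get("values", [])))
    return score

def rerank(results: list[dict]) -> list[dict]:
    """Order results by value alignment, highest first."""
    return sorted(results, key=value_score, reverse=True)

results = [
    {"source": "Temu", "values": ["cheapest price"]},
    {"source": "Garmentory", "values": ["craftsmanship", "ethical sourcing"]},
]
print(rerank(results)[0]["source"])  # -> "Garmentory"
```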
If AI is already edging out Google Search in this way, there could be a world in which the rest of the internet is similarly AI-influenced. For example, what if social media algorithms catered to our higher values? Twitter would surface thoughtful discussions that matched our values rather than ragebait and misinformation. Instagram would connect us with our friends and the people we value rather than celebrities and girls in bikinis. TikTok would share artists and human ingenuity rather than pranks and stupidity. Tinder would focus less on looks and more on whether someone reads the same books and listens to the same podcasts, pairing people based on shared values.
If we could encode those morals in a moral graph, and the internet used that graph to connect us with content and people aligned with our higher goals, our online experience would serve only our best values rather than exploit our worst ones.
As a byproduct, companies would have to start matching the values of their customers, or their products would never be seen. Companies would have to produce better, more ethical goods in order to be surfaced by the algorithm. News sources would have to become less biased and publish more think pieces in order to match our values and thus rank on Google’s front page. Only thoughtful discussions and intelligent people would rise through the viral ranks on social media, because those are the only ones that match our values.
Companies would still make money, and people could still go viral on social media, but because they align with what we value, not because they take advantage of our addictive nature.
A co-founder of the Meaning Alignment Institute says this is how the “attention economy” could become the “meaning economy.”

Valla once wondered “why the human mind, which we mean to be divine, should be so perversely misled as to embrace so quickly what is frivolous, vain, useless, futile, and in a word, evil, and then hold it firmly, instead of grasping the true and solid virtue by which alone we come close to the gods.”
But if the internet served up virtue instead of vice, maybe people would behave more virtuously too. Maybe then we would be closer to the gods, rather than closer to the demons.
I’d love to know your thoughts.
Thanks for reading,