38 Comments

Random quick reaction to this really thought-provoking exercise (thank you as always): I have a very hard time imagining this coming up under capitalism, because it's a system that seeks out ways to make the most money through the least effort (if you look at its tendency to standardize, scale, and cater to the most widespread common denominator). Of course, a lot of companies make good money with more difficult craftsmanship and services. But these tend to be more boutique and don't require the massive investment to get a flywheel like Google going. Will whoever invests all the money and resources into building this more enlightened version of Google want to (or be able to) wait out companies dragging their feet and resisting until the very last possible second before they change their business approach and pay for placement on the "meaning" (i.e., high-effort) algorithm? I hope this made sense.

author

Oh, I should have responded to this comment instead of the other one. But I have the same response. Couldn't it be just as profitable to cater to our better values?

Just to give another example: People don't recycle when there isn't a recycling program that's easy to use. But they do when there is! We might be missing out on huge markets because we limit ourselves to catering to what people don't want more than what they do!


Yes, but I meant more effort on the side of the supplier—customers want to reward those who give them a better choice, but rarely have I met a business truly willing to put in the added work for it. I imagine for a lot of categories the return on effort shrinks dramatically when you provide true quality, maybe because true quality is very difficult to scale.

author

Hmmm, yes, I see your point. For example: there's a reason our phones don't last. Etc. etc. I know some regulation has happened to try to fix planned obsolescence, but a West Elm chair I bought five years ago just fell apart, causing me to exclaim to my husband, "I'm only buying things that are 100 years old from now on!" Regulation might not fix that...


I find the way you think very unique, and I'm so pleased that I have found you.

I read your thought-provoking article and here is my opinion: Humans have always been and will always be imperfect.

We would always try to reach God but fail in this physical garment. We are made of him and we have a divine part, but we also have a dark part. This dark part always triggers us to do something wrong. Even in a perfect society, we would try to escape. Priests secretly drink, smoke, or swear.

Wrong is always attractive, and it can teach us important lessons too. I'm not praising darkness; vices are very destructive, but one way or another we all go through them.

So let's not pretend they don't exist.

The best wisdom, in fact, comes from those hurtful experiences. You can meet amazingly humble people; they all have one thing in common: they have gone through something really bad and survived to tell the world their story, how their courage and their self-forgiveness helped them come back to the straight path.

AI cannot make people wise, only those experiences from hell and extreme adversity can make people infinitely rich in consciousness to not repeat the same mistakes. Isn't that what real progress is?


I love the concept, but I'm not convinced that an LLM could differentiate between my values and morality and someone else's. We also know that people regularly make choices in conflict with their values, and other people don't have the foundation to make moral choices. I'm not sure AI could help them. I did a lot of values clarification work with others in the 1970s, and when I stopped it was because I wasn't convinced people could get that much clarity. Could AI help? I'm not sure!

author

We definitely know that people make choices in conflict with their values. But then why show them only the things in conflict? I don't believe the "because it sells better" line, because we do make good choices when we have good options. And I do think our algorithms could better cater to that.


My answer, if I wear the hat of a single-minded capital maximizer, is that the things in conflict are much easier to monetize, while higher values are harder to uphold. It's equivalent to catering to the System 1 part of the brain vs. System 2.

author

I wonder if that's true, though? For example, if there were one healthy place in the airport, I would eat there. But because there is only fast food, I have to get a bag of nuts at the newsstands. They are getting much less money from me by not having something I want. Now that some airports have healthy options, they seem to be really popular!


I agree that the algorithms could do a much better job catering to our best, rather than our worst, instincts. I would love to see it!


I didn’t mean to sound so negative about moral AI. I would love to see it succeed, but the challenges are enormous.


I hope it's okay to post this here. I wrote a piece called "Can Robots Help Us Write Better Poetry?" on the subject of how we can use AI to actually help us grow as artists and people, expanding our knowledge and connecting to an even wider community of thinkers and thoughts. I think AI is a tool; like any tool, it can help us grow and become wiser, or it can help us become lazier. It depends on how we use it. Here's my article: https://treshathepoetrysaloncom.substack.com/p/can-robots-help-us-write-better-poems


I absolutely love this. AI, like everything, is a tool that we can use to do good or to do harm. But I think it is something that we, the users, must consciously ask for. We need to tell the makers it is what we want and then refuse to use it to do harm... which is hard. It's like choosing to go to a drive-through and order the salad instead of the burger and fries. But, if I understand this correctly, it's possible that AI could evolve to show us the salad first and not even offer us the burger and fries? How do we control it, or ask for AI to give us only the healthy options?

author

Right! Hoping everyone will go through the drive-through and order a salad is not the answer.


You surprised me with this one! AI and moral graphs… pretty fascinating. I am a believer that there is a light side and a dark side to human endeavor. I hope the path of AI development embraces the light. AI need not have a personal self-interest, so it comes down to the altruism of the programmer to make the choice. Lift humanity by promoting common values? Or manipulate for gain? Time will tell…

Apr 9 · edited Apr 9 · Liked by Elle Griffin

In your own small way, you're helping the Internet make us less stupid, as is everyone reading this piece and commenting on it. To that end, I don't think it's an either/or, because while the thought experiment is a nice one, to your point in quoting a 15th-century humanist, we're not going to get around the fact that vices are more attractive than virtue when they provide limbic solutions to complex problems.

Another interesting question would be: how do we sublimate our "vices" in such a way that we can still pursue them (for various reasons that also include the simple thrill of subversion) without contributing to a more vicious society? I'd like to think we're somewhat on our way. Even if most people buy "organic" because they've been successfully marketed it / virtue signalling is extremely effective in hyper-capitalist society (which is often also devoid of a collective spiritual sense of meaning), at least people now know what organic means. It's a slow process, indeed. But the Internet will forever make us stupid AND smart. Oh, tremble! 'Tis the Age of the Great Oscillation.

author

Interestingly, I think we already have, or maybe we do over time? The 15th-century vices Valla was concerned about were brothels and gambling dens, as well as gluttony, greed, and excessive wealth and hoarding. These things all still exist, but we have shifted the way we participate in them, "sublimating" them in different ways.


I really do wonder, in regard to brothels specifically, why as a society we've decided that having professional establishments to cater to sexual desires is worse than forcing it behind closed doors / turning it into an unregulated industry. France had them until 1946, and as Guy de Maupassant's "Madame Tellier" short story (19th century) suggests, oftentimes brothels were less about the sex than about a community space where a substantial part of the population could have a drink and interact without shame. I also just saw "Poor Things," and it was a romanticized notion, of course. My opinion is colored by that, surely, but in the USA specifically, sex is everywhere and ever-present, and yet is still considered a vice by many unless one is "in love."

author

Right, I do wonder what the world would be like if brothels were legal and regulated and thus more of an "at will profession" rather than an often coerced profession.


That's a pretty fascinating question. How do you sublimate your vices? I wonder if there is a healthy outlet for every perceived vice. Like, when you feel rage, you go out and chop wood instead of committing violence... I'd love to see more of that regardless.


Right? I know for one that a lot of my most aggressive instincts found sublimation via video games in the past, as well as in physical competition. I guess one way to think about it, however, is why we think of vices as vices. Generally speaking, there's a moral component, and I certainly don't think it's BAD to feel angry; what matters is how we choose to express that anger. Any cursory listen of Beethoven suggests he probably got mad at the piano sometimes.

Maybe a first step would be to understand why, from a moralistic/cultural standpoint, we consider some things vices and others not, and then create a framework for how to confront simple vices: Struggle with addiction? Get addicted to healthy physical exercise. Struggle with jealousy? Go to a sex club. Struggle with gambling? Gamble with your arm hairs. And so on.


People aren’t inherently bad or good, virtuous or vicious. The society in which we live influences our behavior greatly. It’s not even about AI, it’s about having morals, ethics, values by which we as a society abide. The internet is a reflection of what we believe. If our beliefs change, the internet will change. Like you described in your article, the algorithm was based on peoples’ values not vice versa. What will it take for us to change our values?


I'm not sure that is true. I mean, I do believe that we are inherently ... something. I don't think good/bad quite covers it. Humanistic Psychologists would say that we are "growth oriented" and we will grow towards whatever our loved ones tell us to grow towards in order to be part of the community. If your parents raise you to hate a group of people, you will hate them. If they raise you to love others, you will love them. It's more likely that we can influence others from the outside in than vice versa. IMHO.


That's exactly what I wrote, we are a product of our environment.


Absolutely. I'm not disagreeing with you. What I mean, though, is that I think it's easier to influence the external things that shape people's viewpoints than it is to ask people to change what they believe internally and then make decisions that reflect that externally. I.e., it would be easier to create a TV show that demonstrates a utopian world and subsequently inspire Utopia than for people to want Utopia and then demand a better TV show... although it's not so much a "which came first, the chicken or the egg" question. I guess the chicken creates the egg and the egg goes on creating the chicken... Anyways, thanks for engaging with my comments. It's an important issue.


I believe that everyone has the ability to think of a better future. We need to get into the habit of acting as a collective and not wait for others to think for us. There won't be a hero saving us, rather it will take all of us working together to change things. Thanks as well for the exchange.


Agree! At the same time though, most of us need the support. I think that's part of what AI could do. It COULD put us in touch with the people who are writing more positive, useful, pro-social, uplifting articles, as opposed to putting us in touch with the articles that promote hate. In the modern world we like to think we all can pull ourselves up by our bootstraps, but the fact is that most people need help doing anything, including doing what is right for themselves and others. I would very much like to see AI put more positive things in my feed and eliminate more junk so I don't have to work so hard to make better choices. I also want McDonalds to replace all of its french fries with carrot sticks... but I'm not sure everyone else wants to agree with me on that.


In my vision, thinking for oneself and taking action does not exclude collaborating with others. Some people can think about how to make the search algorithms better, others can work on a utopian film, and yet others can plant trees, invent climate-friendly technology, or design less wasteful fashion. There is so much work to do! That's why I think we need to change our values, from a consumerist and exploitative society to a society that values nature and working for the greater good, less focused on material possessions and more focused on reaching that higher human consciousness we all talk about.

Apr 9 · Liked by Elle Griffin

I like the idea of a more curated experience that learns from our choices. And hopefully it will become smart enough not to pigeonhole us into the same choices over and over (I'm looking at you, Spotify!). Glad that thought is going into this, of course, and keen to follow progress. Thanks for writing about it, so interesting!

founding
Apr 9 · Liked by Elle Griffin

This is good thinking.


A timely (and fun and existential) video on the AI question: https://www.youtube.com/watch?v=0EtxkkabRUw


And... an interesting project from the Meaning Alignment Institute! If you follow it over time, it would be great to hear more about how it's going.

author

I'll keep an eye out for sure. And thanks for the video!

founding

It would be interesting to see search results based on more contextual input. I definitely prefer the conversational aspect of ChatGPT instead of the predictive text searches.

author

Me too!


Hey, I decided to write a post on my thoughts about this: https://philosophypublics.substack.com/p/should-ai-decide-what-we-see-online

I included the thoughts below in my post, but then do a 180-degree turn and end up proposing something similar, I think? ...

Let’s start from the assumption that we would not want our values to be imposed upon us from the outside. This is a risk we would run should we build AI out to curate our experiences on the Internet, but let’s set that aside for now.

Now, imagine some value that you hold for yourself. Or even better, imagine someone that you know, and the things they seem to value. Now imagine that they act in a way that seems to go against their values. You would immediately go find a journalist so that they could write a piece about this completely anomalous situation, where a person went against their values. 😉 I’m joking, of course; I just mean that we do tend to go against our values all the time. We act in ways that are contradictory. We're complicated. We can be confused about what we value, and what we think we value can certainly be wrong, and change over time.

If we have AI reinforcing what we have told it or shown it that we value, even if we were wrong about what that is, it would be all that much harder to change course. With a technology that lacks transparency, it will be impossible to change, adjust, or otherwise control it. That pregnant woman (because if she’s pregnant, by definition I don’t think she’s a girl) might think that she values something based on her religious beliefs, but in that difficult moment she might never find the opportunity to realize that her religious beliefs are in fact not in her best personal interest, and would therefore miss an opportunity for deep reflection, not only about her interests, but about the nature of her faith. (Originally written with speech-to-text, edited for clarity.)

author

Well in the case of a moral graph we would be co-creating that, which I love.

Apr 8 · Liked by Elle Griffin

This comment is a direct answer to your title, not to your article, which I haven't read yet. (I promise I will.)

I thought we make AI wise with our constant notes and ideas. Let's not forget that AI was made by man to serve man.

Some people are scared of it, fearing that it will cause nuclear war or steal people's best ideas and other kinds of baloney scenarios, but I don't buy into this bullshit. I believe it's useful; especially for us writers, it can help unblock our minds at times. Yes, AI is artificial, but so is the internet.

The internet was also once just a concept that one day became reality, and now it's normal for everyone to use.

The internet, AI, and whatever comes in the future will never replace human connection and emotion.

I believe that the closer we come together from all parts of the world, the less we will need these machines of communication.

This is my vision for the future.
