Thank you Elle for this great essay. I’ve definitely been meaning to check out these books! Also, I came to this essay because I wanted to hear your voiceover that you’ve been working on. It really does sound like you!
I actually read the piece along with your narration. I imagine you made a few edits to the text after running it through your voice model, since there were a few changed words and sentences. But what I’m really curious about is your experience with emphasis. In your text, there are a few places where you have italicized words, yet your voice model kind of breezes right by them. Do you know if there’s a way to have it pick up on italics in the text?
Anyway, lovely work here Elle!
Thanks Jeremy!
And the audio has been a fun experiment! Yes, I did fiddle with the text a bit after recording, so it won't always align with the audio, and the model does seem to skip italics. However, I can make adjustments when I need emphasis. For example, in another article I wrote something like "It took juuuuust enough effort" and it pronounced that "just" the way I would have said it.
Also, sometimes I have to tweak the text to make it pronounce words correctly. For example, it spells out acronyms, so when I've written DOGE and ESOP, I had to rewrite them as Doge and E Sopp to get the pronunciations we actually use. I'm also learning about titles and subtitles: if I mark them, it leaves better spacing after the title.
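(For anyone who wants to automate this: the respelling trick above can be done as a quick pre-processing pass over the text before handing it to the voice model. This is just a minimal sketch of the idea; the replacement map is the one I described, but the function name and the idea of scripting it are mine, not part of any particular TTS tool's API.)

```python
# A minimal sketch of the substitution idea described above: swap in
# phonetic respellings before narration so the voice model says
# acronyms the way people actually do. The mapping entries are the
# examples from this thread; extend it as you find more quirks.

RESPELLINGS = {
    "DOGE": "Doge",    # stop the model spelling out D-O-G-E
    "ESOP": "E Sopp",  # make it say "ee-sop" instead of E-S-O-P
}

def respell_for_tts(text: str) -> str:
    """Replace written acronyms with phonetic respellings for narration."""
    for written, spoken in RESPELLINGS.items():
        text = text.replace(written, spoken)
    return text

print(respell_for_tts("The ESOP vote made DOGE headlines."))
# The E Sopp vote made Doge headlines.
```

One caveat with plain string replacement: it will also respell acronyms inside longer words, so for a bigger map you'd want word-boundary matching instead.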
That’s interesting that you need to push the model like this, but it makes sense. That was my main observation from listening to the text: I was wondering if the model could capture your voice away from the pure calm of regular reading! :)
Such a smart, layered piece. What it really made me sit with is how every worldview — anthropocentric, ecocentric, sentientist, steward — boils down to one core tension: how much comfort do we believe we’re entitled to, and at what cost? This world is so fascinating because it reveals that even “sustainable perfection” can feel suffocating. Maybe it’s not just harm we’re trying to minimise, but... meaninglessness? And that comes from choosing with clarity, not compulsion.
I could have written a whole other post about the existentialism throughout these books. Dex struggles with meaninglessness, perhaps as a result of their resigned belief that all humans can do is cause harm. But the robot helps them reverse that sentiment. One of my favorite quotes:
Dex: “It doesn’t bother you? The thought that your life might mean nothing in the end?”
Robot: “That’s true for all life I’ve observed. Why would it bother me?”
Wow yeah so profound!
Great stuff! So true that the most luxurious experiences are often less interesting.
Efficiency bores us, and your story exemplifies this. A fully automated world would be the worst thing that could happen to us. To struggle is to be human.
I don’t think an automated world = boredom. Idleness does.
We should live in a world where we can choose how to spend our idleness. I’d rather choose to do my own sewing than have to work more.
Terrific post. Thought provoking as hell. Now I’ve got to get my hands on some novels by Becky Chambers.
My only regret in reading all her books is that there are no more new ones left. I envy you! Savour them.
Thank you! And I hope you do! She's a master.