Process Talk with Jen: Martha Brockenbrough on Human and Artificial Intelligence

[Posted by Jen Breach for Writing With a Broken Tusk]

Martha Brockenbrough is a writer of very considerable brain. An example: her acknowledgements for Future Tense: How We Made Artificial Intelligence–and How It Will Change Everything (out today from Feiwel & Friends) are couched in a conversation with ChatGPT, in which Martha teaches the AI exactly how many people it takes to make a book. “‘Given all the work that goes into a book,’” she asks it, “‘and all the human beings required to do that work, do you still think [a book can be written] in six months to a year?’ ChatGPT was silent for a long time. And then this reply appeared, in red type: ‘An error occurred. If the issue persists, please contact us through our help center.’”

I was thrilled to speak to her about ‘all that work.’ 

[Jen] I was on the edge of my seat reading about the famous matchups between checkers and chess masters and computers. Then came the 2016 Go match, in which AlphaGo, an AI, beat 18-time world champion Lee Sedol in 4 out of 5 games. I was fascinated by the way you concluded the chapter: “Humans created AlphaGo. They created the 100,000 games’ worth of training data. They wrote the algorithms for learning and search that AlphaGo used to win. They sweated and tested and tinkered and tried. And yes, a champion fell. But it was human beings behind the machine.” You nail how much has been skewed by the “man versus machine” narrative. How hard, or not, was it to keep human-ness central to a story about technology?

Photo courtesy of Martha Brockenbrough

[Martha] I love this question. This was a big goal of the book—to show technology as a human thing. To show how human beings have been yearning for something like AI since ancient times, which fascinated me in my research. And it’s not just a “Western” thing. Many of the models and intellectual advances come from the East, which was also important for me to show. As I wrote toward the end, the arc of the material universe, the things we as a species have built, bends toward artificial intelligence. We’ve been striving for this. Now we’re at a certain threshold. What’s on the other side of the door depends on what we build and what our primary goals are. If our primary goal remains profit, then AI has a high chance of increasing wealth gaps and suffering. If our primary goal remains control (as in authoritarian nations), humans will suffer. If our primary goal is to serve living beings, then AI has a high chance of helping us be better world citizens and stewards of the planet.

[Jen] What a choice. You are describing deep and complicated conceptual and technical science and ethics with clearly understandable thought experiments and metaphors–sandwiches, for example, to demonstrate the differences between human and computer thinking. How did you hone this craft skill?

[Martha] Thanks for saying this. When I started the book, all I knew about artificial intelligence was, loosely, what it could do. I did not know how it was built. I did not know how it worked. So, as I studied the history, I understood how human minds approached the problem, and this helped me better understand how they solved each phase of it. Once I could understand it, I could come up with metaphors. This is where figurative and poetic writing is so important for humanity. This is how we teach ourselves how to use language and imagery to convey meaning across complex terrain. I still remember learning simile and metaphor in third grade, and I know we’re in an era where things like five-paragraph essays are thought to be more useful than poetry or narrative writing. I know we’re in an era where Humanities programs are being cut from colleges and universities because they’re not profitable. I believe that everything is connected. Living things. Intellectual disciplines. Cultures and languages. Belief systems. When we see the underlying patterns that contain common elements, we can deepen our understanding of the world, and this is a way of thinking that lives in the Humanities. If we want deep understanding, we will give our children connections to the language, art, and history that help form mental models that can carry meaning across the gaps that divide us.

[Jen] Well said. In an interview you mentioned that you researched Future Tense for four years. The book includes nearly 200 footnotes! What was your research approach? How did you keep it organized (if at all)? How did you decide what to include and what to leave out?

[Martha] I wanted to understand EVERYTHING. I’d read enough about AI in the media to understand that there were doomsayers and absolute cheerleaders. I believe we are at an inflection point in human history, and I wanted to understand what brought us here—and who brought us here. That tells you a lot about who might be excluded from developing the technology and benefiting from it. So, I read lots of history and became especially fascinated with Claude Shannon, a mathematician sometimes called the ‘father of information theory.’ Podcasts and videos were also helpful, not only for hearing voices and discussions but also for taking courses that gave me an introductory understanding of how algorithms work, even though I am not a software engineer.

Nonfiction like this is often sold on proposal, so I had an initial outline that turned out not to be workable once I’d learned more. So, I asked what the big questions would be, and then organized the book into those big parts. And from there, I broke it down into sections. In terms of keeping things organized, I’d spent so much time with the material that I kind of drafted without notes and then put in footnotes and sources after each chapter was done. For better or for worse, I was trying to walk a non-sensationalistic line, because I don’t think young readers are served by too much cheerleading or too much doomsaying. I also don’t want to talk down to young readers. I have too much respect for their intellect and curiosity.

[Jen] That respect is so clear, and so is your measured, comprehensive look at all sides of the subject. In the book, you mention Robert Heinlein and Isaac Asimov’s groundbreaking science fiction writing. Did you read any topical sci-fi while researching or writing FUTURE TENSE? (I am thinking specifically of Ted Chiang’s short stories “The Lifecycle of Software Objects” and “Exhalation.”) If not, what were you reading (apart from hundreds of nonfiction books and articles about technology)?

[Martha] I read Klara and the Sun by Kazuo Ishiguro, and quite enjoyed it. The movie Her also really stuck with me; although director Spike Jonze calls her an operating system and not an algorithm, he’s portraying artificial general intelligence in an interesting and compelling way—along with the relative puniness of the human brain once we achieve a certain level of computing power. Ted Chiang is amazing, and I tend to agree with his assessment that AI as it’s conceived is going to concentrate wealth among the already wealthy and harm workers. Capitalism + AI is not a good formula, because capitalism centers capital, and algorithms do what they’re optimized to do. When profit is the aim and not human flourishing, well, humans will not flourish. 

[Jen] It’s astonishing that all that innovation and development truly does come down to such basic ethical questions. A timely, important, illuminating book, Martha, thank you. 
