Broussard also recently recovered from breast cancer, and after reading the fine print of her electronic medical records, she realized that AI played a role in her diagnosis — something that is increasingly common. This discovery led her to conduct her own experiment to learn more about how good AI is at diagnosing cancer.
We sat down to talk about what she’s discovered, as well as the challenges with police use of technology, the limitations of “AI justice,” and the solutions she sees for some of the challenges AI creates. The conversation has been edited for clarity and length.
I was struck by the personal story you tell in the book about artificial intelligence as part of your own cancer diagnosis. Can you tell our readers what you did and what you learned from this experience?
I was diagnosed with breast cancer at the beginning of the pandemic. Not only was I stuck inside because the world was shut down; I was also stuck inside because I’d had major surgery. Looking at my chart one day, I noticed that one of my scans said: “This scan was read by artificial intelligence.” I thought, Why did AI read my mammogram? Nobody had told me about it. It was just buried in an obscure part of my electronic medical record. I became very interested in the current state of AI-based cancer detection, so I designed an experiment to see if I could replicate my own results. I took my own mammograms and ran them through an open-source AI to see whether it would detect my cancer. What I discovered was that I had a lot of misconceptions about how AI works in cancer diagnosis, which I explore in the book.
[Once Broussard got the code working, AI did ultimately predict that her own mammogram showed cancer. Her surgeon, however, said the use of the technology was entirely unnecessary for her diagnosis, since human doctors already had a clear and precise reading of her images.]
One of the things I came to realize as a cancer patient is that the doctors, nurses, and healthcare professionals who supported me through my diagnosis and recovery were so amazing and so important. I don’t want some sterile, computational future where you go get a mammogram and a little red box pops up that says, “It’s probably cancer.” When we’re talking about a life-threatening disease, that’s not the kind of future anybody wants. And not many AI researchers have had their own mammograms.
You sometimes hear that once AI bias is sufficiently “fixed,” the technology could become much more ubiquitous. You write that this argument is problematic. Why?
One of the big problems I have with this argument is the idea that AI can somehow reach its full potential, and that this is a goal everyone should strive for. AI is just math. I don’t think everything in the world should be governed by math. Computers are really good at solving mathematical problems, but they are not very good at solving social problems. And yet they are being applied to social problems anyway. That imagined endgame of “Oh, we’re just going to use AI for everything” is not a future I’m signing up for.
You also write about facial recognition. I recently heard the argument that the movement to ban facial recognition (especially its use by police) is hindering efforts to make the technology fairer and more accurate. What do you make of that argument?
I’m definitely in the camp of people who don’t support the use of facial recognition by police. I understand that this is discouraging to people who really want to use it, but one of the things I did while researching the book was take a deep dive into the history of police technology, and what I found was not encouraging.
I started with a great book, Black Software, by [NYU professor of Media, Culture, and Communication] Charlton McIlwain, who writes about IBM wanting to sell lots of its new computers at the very moment of the so-called War on Poverty in the 1960s. You had people who really wanted to sell computers looking for a problem to apply them to, but they didn’t understand the social problem. Fast-forward to today, and we are still living with the disastrous consequences of the decisions made back then.