Nana Sledzieski, April 11, 2021
This month, the Sloan School of Management at MIT published a blog[1] offering seven lessons for creating successful machine learning projects in organizations. Sara Brown highlighted a 2015 decision by the US Patent and Trademark Office to analyze the data in over ten million patents the office had amassed in almost 220 years. What caught my attention was that the AI also analyzed the decision-making processes of the patent examiners to support training opportunities. This month, Kate Crawford also wrote about AI in Nature[2], arguing for legislation to protect society because “unproven artificial-intelligence tools” are being pushed into workplaces and schools under the “pretext” of checking the emotional states of children and employees working remotely during the pandemic.
Together, these two articles got me thinking—again—about the growing prevalence of affective computing. Will the developing discipline eventually help to bring peace to fractious societies? Or will it lull communities into a dystopian coma, as represented in Lois Lowry’s The Giver? In that novel, which won the Newbery Medal for contributions to American children’s literature, two twelve-year-old children feel “stirrings” of emotion, which disrupt a utopian society where negative memories have been whitewashed.[3] The novel’s overall premise is that emotional pain is an important part of living a complete and robust life. But how much pain do we need to feel alive?
This is a nascent debate, weighty with ethical concerns. Yet I often wonder whether AI can be used to help heal the negative emotional experiences that lead children to become violent adults. In 2017, a team of researchers from Australia and Spain published an article in Frontiers in Psychology about a five-year randomized controlled trial for youth experiencing early psychosis.[4] They used natural language analysis and chatbot technologies as ongoing—rather than short-term—interventions designed to capitalize on young people’s general attraction to technology. Automated content was delivered to participants in social media newsfeeds, alongside human interventions. The clinical aims were to guide emotional disclosure, individualize therapy, detect the onset of anxiety attacks, and reduce social isolation. The long-term effects of this approach are particularly intriguing because a growing body of research is examining whether the brain’s neural networks can be rewired after adverse childhood experiences. I’m curious: what do you think are the pros and cons of using artificial intelligence to heal society?
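For readers wondering what “natural language analysis” for emotion monitoring might look like in its simplest possible form, here is a toy sketch. To be clear, this is not the system from the Frontiers in Psychology study; it is a minimal, hypothetical lexicon-based scorer in Python, with invented word lists and an invented alert threshold, meant only to illustrate how a program might flag distress-laden language in a message feed.

```python
# Toy illustration only: a hypothetical lexicon-based distress scorer.
# The word lists and threshold are invented for this example and are
# NOT the method used in the study cited above.

DISTRESS_WORDS = {"afraid", "alone", "panic", "hopeless", "worthless"}
CALM_WORDS = {"calm", "hopeful", "supported", "safe", "grateful"}
ALERT_THRESHOLD = 0.5  # arbitrary cutoff for flagging a message


def distress_score(message: str) -> float:
    """Return the share of emotion-lexicon hits that signal distress."""
    words = [w.strip(".,!?") for w in message.lower().split()]
    distress = sum(1 for w in words if w in DISTRESS_WORDS)
    calm = sum(1 for w in words if w in CALM_WORDS)
    total = distress + calm
    return distress / total if total else 0.0


def flag_for_review(message: str) -> bool:
    """Flag a message for a human clinician when distress dominates."""
    return distress_score(message) >= ALERT_THRESHOLD


if __name__ == "__main__":
    feed = [
        "I feel hopeful and supported after today's session.",
        "I am so alone and afraid, and the panic will not stop.",
    ]
    for post in feed:
        print(flag_for_review(post), "-", post)
```

Real systems are far more sophisticated, of course, but even this toy version surfaces the ethical question Crawford raises: who decides which words count as distress, and what happens when the classifier is wrong?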
[1] Brown, “7 Lessons to Ensure Successful Machine Learning Projects.”
[2] Crawford, “Time to Regulate AI That Interprets Human Emotions.”
[3] Lowry, The Giver.
[4] D’Alfonso et al., “Artificial Intelligence-Assisted Online Social Therapy for Youth Mental Health.”