Today, we are in a very different age than when I first looked at a computer printout in 1965. Information, disinformation, and “infoshum” (or “information noise,” content that aims to maximize clickability while minimizing actual information) are generated and disseminated in a mind-moment over our laptops and cell phones.

We are in a global race around the development and deployment of AI technologies, driven to a great extent by capitalistic and power-hungry interests as well as our own inherently competitive mindsets, with the greatest benefits accruing to the individual, corporation, or country that dominates the race for political, military, economic, or ego power.

Some headlines are asking about another race as well. A recent Harvard International Review article begins with the headline: “A Race to Extinction: How Great Power Competition Is Making Artificial Intelligence Existentially Dangerous.” In the midst of the various rushes to adopt this burgeoning technology, one critic has even asked, “How can we fill up the depleted reserves of trust and reason in our world as we see the debilitating impact of the digital world on our individual and collective psyche?”

It is clear to many of us that we have a moral responsibility to understand and inform ourselves, to the extent that we can, about the benefits and burdens of AI. As many are pointing out, persuasive technology is quickly degrading our sovereignty, commodifying our identities, and further handicapping the agency of the so-called “useless class.”

Basically, AI is a double-edged sword. For example, Marc Andreessen, in his AI manifesto, seems assured that AI is the panacea for everything from child care to warfare. On the other hand, various leaders in the AI field have been warning that AI could usher in the end of the human race as we know it.

Both might be right. Consider deepfakes, privacy violations, racial and sexist algorithmic biases, AI-primed market volatility, autonomous weapons, fraud and disinformation, social surveillance and manipulation, and identity commodification. AI hallucinations can lure one down the rabbit hole of a potentially dangerous and deluded unreality.

There are also extraordinary benefits to AI, including accurately diagnosing various diseases, protecting biodiversity, dealing with issues related to climate change, predicting the spread of infectious diseases, detecting guns in schools, helping the speechless speak, slashing energy consumption, and more.

In sum, we are in a new information landscape primed in part by humanly flawed data, compounded by extractive economic interests, and furthered by adolescent drive, with little thought given to the social, environmental, moral, and ethical harms that we are currently experiencing and that are bound to be even more problematic in the near and far future.

In the mid-1980s, I watched a startling interview with a top general in the US military. It was about the disposal of nuclear waste. In the segment, he made it clear that, from the dawn of the US’s embrace of nuclear power and materials, there was never a plan in place for how to get rid of radioactive waste. Today, most of us know that nuclear waste is piling up around the world, with consequences that are truly scary to consider.

My sense is that we are in a not dissimilar situation with AI, regarding how to deal with the exponential and pervasive accumulation of fractured-trust-waste that is contaminating our society and psyche. There doesn’t seem to be a clear plan in place for recovering the broken trust caused by AI, and a lot is at risk. Yet we can still shift this narrative before it is too late.

You may be asking yourself, what does AI have to do with Buddhism? Curiously, the slipperiness of AI points to an interesting Buddhist perspective: the illusory nature of the phenomenal world. What can we really believe in, after all? What really is real? From the Diamond Sutra:

“Thus shall you think of all this fleeting world: A star at dawn, a bubble in a stream; a flash of lightning in a summer cloud, a flickering lamp, a phantom, and a dream.”

Yet most of us need to believe that what we experience is not entirely a dream in order to function in our society. Imagining the world as fixed makes us feel secure, even though, from a Buddhist perspective, reality seems to be just as slippery as AI. In the midst of this slipperiness, however, lie issues of serious harm and deep suffering.

I have to ask myself: Can Buddhist teachings really serve to mitigate the harms generated by AI? Can meditation help? Are we too far gone? Will this simply be the landscape where we do charnel ground practice in the future, a charnel ground being a place where bodies are left above the ground to putrefy, and where we go to practice?

There have been some Buddhist suggestions that we might mitigate the harm caused by AI through introducing into AI the ethos inspired by the bodhisattva vow to awaken and end suffering. Some have even proposed the slogan “intelligence as care” to try to revise the current definition of intelligence and to point to a better way forward.

I ask myself: can there be such a thing as artificial wisdom or artificial compassion? Maybe yes, maybe no. If yes, then perhaps AI could be created to include within its framework a Buddhist ethic of virtue. Frankly, I think not, and the idea of reframing intelligence as care might be seen as too little, too late.

Another Buddhist perspective that comes to my mind is the term “appamada,” usually translated as vigilance or conscientiousness. Stephen Batchelor translates it as care. Appamada means to live by the vow not to harm, to be diligent about this commitment, to be heedful of when we do engage in harm, and to choose to correct course. Could appamada be trained into AI systems? Or is appamada up to us? Can we bring the spirit of appamada, of vigilance, conscientiousness, and care, into how we approach AI as developers and consumers? One also has to ask: how? I think this is what Tristan Harris and his team at the Center for Humane Technology are endeavoring to do: inform us so that we can make sane choices and be responsible providers and consumers of the technology.

From philosopher and neuroscientist Francisco Varela’s perspective on the enactive view, AI is already embedded in the context of our lived experience; it is coextensive with all aspects of our lives, whether we are accessing technology or not. In fact, it is not so much a matter of our accessing it: it has accessed us. It is now a part of our psychosocial biome, whether we know it or not, or like it or not. To put this more generously, in Thich Nhat Hanh’s words, from the point of view of codependent arising, we inter-are with AI; our views have colonized it, and it is colonizing us.

What might Buddhism offer in the midst of the tsunami of AI development? What we have to offer is subtle but important. I appreciate what Roshi Norman Fischer has called the “bodhisattva attitude,” an unshakable attitude of clarity that reflects our very character. This is our stance, our point of view, the internal atmosphere that colors our way of seeing the world, of seeing reality: an attitude saturated by appamada, by conscientiousness and compassion. As global citizens and consumers who care, we have an important task before us. Amid the bombardment of persuasive distractions on our mobile devices and elsewhere, we are called to give attention to what we are doing, to ask why we are truly doing it, and to ask how this enables the extractive, capitalistic self-interest that is driving the wholesale development of AI.

We also need to remember that evolution has endowed us with new brain competencies that can enhance our capacity for being intentional as we meet the complexities of our world. We can be deliberate about our actions; we can choose to act conscientiously, and we can strengthen those capacities within us that make it possible for us to engage with our world with fundamental integrity.

It is important to remember why we are really here. This is not about “mindfulness washing” or “wisdom washing,” trying to look super aware and altruistic rather than being genuinely mindful and ethical. I believe it is imperative that we strengthen the conditions that make our actual motivations visible, and that we deliberately cultivate an intention that is free of self-interest and fundamentally nontransactional. This involves sensitivity to what is present, the capacity to perceive present circumstances clearly, and the will and wisdom to consider the deeper downstream effects, or what we call in Buddhism, karma.

We so often see that personal preferences, self-centeredness, greed, fear, and distractibility distort our perception of reality, and this influences our values, motivations, intentions, and behaviors. From the Buddhist perspective, intention is a mental factor that is directional and deliberate. Our motivation, on the other hand, might not be fully conscious to us. To put it simply, our so-called good intentions can be driven by unconscious, ego-based, self-interested motivations, and lack fundamental integrity. A related issue is understanding that our unconscious motivations can be the cause of preconscious moral suffering, including moral injury, or a sense of subtle but pernicious shame or deep regret. 

Please understand, I am just a Buddhist teacher who right now is inundated with AI articles predicting doom or liberation. Perhaps none of us, including the developers of AI, can know fully what the downstream effects of AI will be. But we do know that the velocity of AI’s development is stupefying, and that opinions are numerous regarding this powerful tool, which has our socially and culturally biased intelligence woven into it while seemingly lacking any real wisdom.

It could make a significant difference if both developers and consumers approached the development of AI with the attitude of the bodhisattva, with vigilance and conscientiousness (appamada), being deliberately free of capitalistic self-interest and hedonic curiosity. We also have to become more discerning and responsible about what and how information is delivered, and, as consumers of information, have the awareness to recognize that there are attempts to spin us, manipulate us, undermine us, and hijack our fundamental faith and confidence. 

As it stands now, the capacity to resist the harms of AI is mostly accessible to the more privileged among us. From the point of view of socially engaged Buddhism, privilege confers responsibility: responsibility to raise up the so-called less privileged and, more importantly, to deconstruct the very systems that confer privilege.

In the end, whether AI is life-affirming or life-denying depends on all of us. To paraphrase Thich Nhat Hanh, we, as developers of AI and consumers of AI, should engage the soft technologies of self-correcting practice and of developing healthy, transparent communities as the sane instruments with which we experiment with truth. Both sides of this equation need to actualize appamada, that is, vigilance, conscientiousness, and care, in how we deploy and use these emergent technologies. If we are in an institution that is developing AI, we can bring “wise friction” to these institutions to decenter and decolonize AI’s embeddedness in a Eurocentric capitalistic worldview. We can also advocate, with Indigenous consent, for diverse knowledge systems to be a fundamental part of the AI landscape, as per the work of Sabelo Mhlambi. We can legally call for the decommodification of our very identities. We can further commit to disrupting the systems that foster structural violence reflected in AI systems. In other words: We can make good trouble, and we should; we shouldn’t wait. Ultimately, can we foster an AI landscape that is not only seemingly rational but also genuinely relational and rehumanized, where dignity, human rights, and the rights of nature are valued?

And we have to be unflinchingly honest as we embark on this profound journey of developing new forms of intelligence. Instead of leaving the fractured-trust-waste for later, let’s deal with it now. Facebook acted like Big Tobacco in its formative years, knowingly sowing social distrust and psychological trauma in young people while denying it all along. We can’t allow ourselves or our friends to act in such bad faith this time around.

We should make sure that these big-brained mammals who are driving the development of AI form a CERN-like relationship of cooperation and ethics to ensure that AI is a positive contributor to a sane and possible future. And, most importantly, all of us must endeavor to live by the values and vows that relate to ending harm through technology. 

With gratitude to Abeba Birhane, Randy Fernando, Sensei Kozan Palevsky, and Soren Gordhamer for their help in reviewing this piece. 
