Saturday, April 04, 2026

First Contact with the Resurrection

 


Because of the floating nature of Easter’s date,
it can share a day with April Fools’ Day, or Passover…
or, as it does today, the fictional Star Trek holiday of “First Contact Day.”
This is the day when Humanity first meets the Vulcans
—our first experience of “Live Long and Prosper.”

            I bring this up not because the Halversons are a Trekkie household (though we are)
—but because sci-fi tropes around alien contact can help us hear the Gospel
—they can ensure we’ve not tamed the resurrection or tamed Easter
—after all, there is something startling,
something alien,
about the two Marys’ first contact with the resurrected Christ.

            The Marys go to the tomb, all hope lost
—in the Star Trek universe humanity is failing to pick up the pieces after a global nuclear war,
then the Vulcans come along,
and we’re not alone,
and that changes everything.

            There is an earthquake
—as terrifying and strange as
the unnatural phenomena in the movie “Close Encounters of the Third Kind”
—culminating in the “mashed potato scene.”

            This amazing strange new thing
—the appearance of the resurrected messiah
—is revealed to two women
—disempowered, unexpected people
—the nobody is somebody,
God prefers the powerless to the powerful
—just like E.T. landing with Elliott,
a little boy whose friendship bridges the gap between the stars…
or even the movie Men in Black
—Will Smith is recruited alongside Green Berets and members of SEAL Team 6
—the elites scoff at a mere New York City cop being up to the task,
but he’s the only one who gets to be the insider, privy to Aliens Among Us.

            The Angel of the Lord descends like the Man who Fell to Earth.
Like shape-shifting aliens as varied as Odo or Borne,
no one can quite describe the Angel
—instead metaphors and similes must be used:
“like lightning,” “white as snow.”

Finally, the women rush to tell of this new and strange thing,
just as people did during the infamous panic caused by the radio broadcast of “The War of the Worlds.”

 

            You see what I’m saying, right?
The Resurrection should startle us,
should leave us in a state of terror and jubilation!

Remember in CS Lewis’ The Lion, the Witch, and the Wardrobe someone asks if the Jesus figure, the Lion Aslan, is safe, and the response is, “Who said anything about safe? ’Course he isn’t safe. But he’s good. He’s the King, I tell you.”
So too encountering Jesus,
so too first contact with Resurrection!

 

Prayer

            Today’s Resurrection story is a close encounter, a first contact.
An otherworldly experience
—earthquakes and strange figures descending from the sky,
an overwhelming message that elicits fear and joy…
A message that just must be shared.

            Contact with the risen Christ has a seismic impact,
both literally and metaphorically.
Like tectonic plates or high- and low-pressure systems
—smashing against one another
storms and earthquakes of all sorts re-sort the world
—when our world meets the God of Resurrection!

 

            Death meets Life
—these women show up at the tomb,
and find the birth of a new creation!
On the other side of Easter,
—Death is not a totalizing force!
—It is no longer a tool the mighty use to acquire and maintain their stranglehold on power.
—That imperial authority is snapped when the Empire’s soldiers sleep like death
—nothing can separate us from this Resurrection God!

These women who witnessed Crucifixion and Burial,
witness Resurrection as well.
The worst coercive power in this world, Death,
is broken upon the Gospel
—the Good News buries the grave!

            Death, and all the rotten history each of us has had with it
—it gives way to Life!

 

            Earth meets Heaven
—our Immanent Frame
—the idea that only the measurable matters,
that only what we can see is important,
is balanced out by
an angel balancing upon the stone
that once held tight Jesus’ tomb.

            “There are more things in heaven and earth, Horatio,
/ Than are dreamt of in your philosophy.”

There is room for the broad scope of our imaginations
and heavenward yearnings.

 

            The present meets the future
—The initial gospel these women carry out to the other disciples is:
“Go to Galilee”
that is, the sadness of the present moment,
the destruction, the despair
—it is a fog that will burn off in the morning sun,
it is temporary, there is a future, there is hope
—“there, in Galilee, they will see me.”

            If you feel trapped today, hopeless,
like a period has been indelibly imprinted at the end of your life
—know that God loves turning periods into commas,
take heart, trust that you can face tomorrow!

 

            Encountering Christ, First Contact with Resurrection
—these women respond with a mix of fear and joy:
fear that is awe,
joy that will not quit.
Look!
In Christ you have life!
You have heaven!
You have a future!

            So too with us!
Life, heaven, a future! Wow!
Let’s offer songs of praise,
let’s shout with holy joy!
Our Glorious Resurrected Lord triumphs!
Alleluia, Alleluia, Alleluia!

Friday, April 03, 2026

Some Questions about AI in an Aristotelian Ethical Frame

 

To begin with here are two definitions of Artificial Intelligence from the ELCA’s Corporate Social Responsibility Issue Paper:

“AI is generally considered to be a discipline of computer science that is aimed at developing machines and systems that can carry out tasks considered to require human intelligence.”

“AI refers to the theory and development of computer systems that can perform tasks that typically require human intelligence, such as speech recognition, decision-making, and pattern identification. AI encompasses a broad spectrum of capabilities, from mimicking human actions and thought processes to acting and thinking rationally.” 

               What follows are some thoughts using my stripped-down version of Aristotelian Ethics—Glasses, Hammer, Map. This framework asks three basic questions: Where are we? What tools do we have? Where are we going?

 

Glasses—Where are we currently as a society in relation to AI?

Congregational Use of AI:

What are legitimate things an ELCA congregation should use AI for? What church officer functions should AI augment, or even replace? What are the consequences for a congregation relationally, legally, ethically?

Preaching and AI:

The temptation to claim other people’s sermons as our own has been out there forever. With the advent of the internet, finishing a sermon is always a Google search away if the pastor is not diligent and faithful. Now, with AI, a few prompts can produce a completely “original” sermon.

With any sort of homiletic plagiarism, there are the questions of contextuality and authenticity, as well as the tinge of lying and theft. With AI, there is additionally the weirdness of a simulacrum of a preacher speaking to real people. What is alive? What is true? What parts of the testimony are the preacher’s own faith and their witness to the gospel?

Loneliness:

There is a whole cadre of people who use AI chatbots as everything from a boredom pacifier, confessor, or substitute child, spouse, or lover, to a sort of substitute god—an omnipresent, omnipotent creature who cares even if no one else does. Meta offers AI friends, and there is talk of feeding the memories of dead loved ones into AI as an artificial resurrection. What does the church say about these things? How do we sing a more beautiful song than the Sirens’ song of artificial companionship?

Education:

What’s already going on sounds like a dystopia to me. Whole academic cycles of AI writing college students’ papers and professors grading them using AI. How can AI help learning happen and how does it become an impediment?

Copyright:

              A while back I was informed of an incident where a seminarian turned in an AI-written Bible study as if it were their own work. What was stranger still, the AI had done much the same: it had simply copied and pasted one of my Bible studies that I put up on this blog and claimed it whole cloth as an AI-created Bible study. Imagine that, a giant multinational company poured billions of dollars into a thinking machine, and all the machine could think to do was plagiarize little old me! This odd experience of mine can’t be an isolated incident. How ought our society manage AI’s acts of “borrowing” from actual living, breathing humans?

Jobs:

              Recently Zillow laid off 25% of its employees, replacing them with AI. From what I’ve heard, that is the tip of the iceberg. The numbers I see thrown around regularly suggest that about 20% of people younger than me will be unable to find a job on account of AI… we should maybe have a plan for that.

The Environment:

              It’s hard to imagine now, but one of the selling points for AI was that it would be connected to electric grids and the like and manage energy use in a way that would lead to conservation, reduced CO2 emissions, and lower electric bills for everyone. So far that hasn’t happened. Instead, Google, which initially promised to be emissions-free thanks to AI’s brilliance, has increased its emissions by 50% due to AI use. If AI is sucking up water and power resources to such an extent that it is noticeable on everyone’s electric bills, and there is talk of AI-driven droughts… maybe we should name no-go boundaries for resource use by these machines.

Deep Fakes:

              It is important to name that falsifying images of other people, and whole videos, is a violation of the 8th commandment. If I cannot tell the difference between my neighbor saying something on a video chat and a digital doppelganger saying it, that’s a problem; that’s a truth problem!

Built in Bias:

              There have been instances of hiring AI discriminating against women applying for engineering and other “technical” jobs, and against men applying for nursing jobs. There have also been instances of security-video-monitoring AI flagging Black people as shoplifters, even as they are in the act of paying for items. AI tends to take human biases and explode them into hard-and-fast laws coded in ones and zeros. Perhaps the Lutheran paradigm of Law and Gospel has something to say about the creation of Frankenstein laws out of Dr. Frankenstein’s biases?

Plausible Deniability for Illegal Activity:

              AI has been used to skirt and break laws. For example, an insurance company used AI to deny 300,000 claims in a minute. That particular denial-of-claim action was one that had to be analyzed and signed off on by a doctor, and the AI was not a doctor. Likewise, landlords have been caught using AI to collude on rent prices. Law enforcement agencies are hesitant to prosecute these types of cases because AI makes everything technical and complicated.

General Discomfort:

When trying to figure out the landscape of the AI world, it is worth noticing that a good number of people directly involved with AI are raising alarming warnings about AI developing interests that diverge from humanity’s, ways of communicating beyond human understanding, and means of “escaping” their current digital habitats… perhaps a bit of caution is in order.

In general, it is worth asking: Have we already reached a tipping point where we can’t go back due to national security concerns? If so, how did we allow this to happen?

 

Hammer—What tools do we have to deal with AI?

Halting all AI research:

Simply put, we could decide AI is an immoral and overly dangerous tool, and advocate for all companies to cease any further advancement of AI technology. The main pushback to this idea is that less moral companies or countries will leapfrog those who forgo AI, and non-AI-using countries, companies, and people will be left on the ash heap of history.

Install “throttles” on all AI:

              If one of the dangers is that AI will become uncontrollable by humans, why not install a kill switch, so AI doesn’t kill us?

Regulating AI nationally:

              What if AI companies had to be transparent about when AI was part of a process and reveal, at least in a general sense, what their algorithms were being trained on? What if they had to name who was responsible when AI hurts someone? What if there was a government agency that oversaw AI development and gamed out unintended consequences? What if companies had to offer human alternatives? What if we wrote laws that addressed how AI interacts with remote facial recognition, insurance and credit, child sexual abuse, deep fakes, artistic integrity, and copyright?

Compensation and retraining of workers:

              If AI is going to shrink our workforce by 20%, what do we do with those people? How should workers who lose their jobs on account of AI be treated? What sort of jobs should they be doing? Should jobs no longer be something humans aspire to (and yet we know there is a dignity to labor)? Are we talking Universal Basic Income for the 30 to 60 million Americans who are going to be out of a job?

For that matter, how do we compensate people whose work was used to train AI? If big tech companies are going to claim authors’ works as their own, lifting upwards of 70% of a work word for word, shouldn’t those authors be compensated?

Push for global treaties around AI

              If AI is the new nuclear power, and that includes weaponization of AI, shouldn’t existing international treaties take it into account? For example, might we want to ban fully autonomous military weapons?

For that matter, if AI is a potential threat (or boon!) to everyone on the globe, shouldn’t everyone on the globe have a say in our fate and future?

Carbon-neutral pledges:

              Some AI companies made carbon-neutral pledges around their AI work… and they’ve not kept them. Should those pledges be enforced somehow? Similarly, what if AI companies had to report their water use and net carbon emissions? How much does an AI data center damage our planet?

Transparency reports:

              What might it look like if there was a consumer-protection website that described the ways different companies are using AI? For example, if my car insurance company were tracking my driving via AI-derived data from facial recognition software, I might look for a new insurance company.

Human Rights impact assessments:

              What if we had concrete data on what AI is doing to human quality of life? What if we knew what targeted ads do to people’s behavior patterns? What if it was taboo for AI companies to work with authoritarian governments?

 

Map—What are our goals for AI?

              Because AI is versatile, ubiquitous, and in its infancy, now is the time to ask, what do we hope to do with AI? What are our goals for it? Where are we going with it? If there is no plan, anything is possible.

What is our goal for AI? Is it to eliminate all entry-level white-collar jobs? Is it for intellectual property theft by proxy? Is it a coding tool? Is it a union-busting device? Is it an educational tool? Is it a digital parent or romantic partner? Is it a taxi driver? Is it a medical diagnostic tool? Is it an electronic day trader? Is it a replacement for human relationships writ large? Is it a digital slave? Is it a replacement for humans? Is it a replacement for CEOs? Is it a steroid for economic growth? Is it a dead man’s switch for nuclear weapons? Are we trying to create an electronic god? Is it Clippy? What exactly are we planning to do with AI?

Are we creating an idol?

              In the crassest sense, AI can function like a god. We ask it questions as we once did at Delphi; we conceptualize it as containing near-infinite knowledge with an astonishingly long reach. In an emergency, or when we are at our wits’ end, we might turn to AI for a way out.

              More in keeping with our confessions, idols are those things that we put our trust in, that are not God. There are surely reasons to be in awe of AI, to appreciate AI, and find it reliable. How very dangerous that is!

Are we creating a human-ish entity?

              Perhaps we’re not shooting for heaven, but instead for Eden. If AI is to be a silicon life form, not unlike a human being, there are some big questions we should be asking. Broadly speaking, where is the line between the co-creation that is a creative tending of the garden, and the moment we clothe ourselves in naked vanity and eat the apple?

              Additionally, have we thought through what the existence of non-human, non-biological people will mean for the dignity of human beings? How will AI-people shape how we understand humans? Will we look and see the image of God, or a caricature of ourselves reflected back at us?

If it is a tool, what sort of tool, what sort of work?

              As with most tools, AI can be used for tremendous good or tremendous ill. Fire can cook a meal or burn down a village. Nuclear power can provide electricity to a whole city or obliterate that same city… or even the world.

Hopes:

              In a world with simply too much information, AI can be a tool to sift through it all. This could be a boon to interdisciplinary work, scientific research, and the creation of new drugs. Perhaps it can streamline medical services, workplace efficiency, and energy grids. Access to healthcare and education could be transformed by AI.

Worries:

              If war is something that must always be mourned—as the ELCA’s Social Statement says—what happens when thinking machines make decisions about war? How might AI algorithms curtail freedom of thought and freedom of expression? Facial recognition software already has some sinister racial biases, and that software is AI’s “eyes,” so will these tools be racist? If AI can sift through so much information that it can track individuals, what will the use of these tools do to the right to privacy? How are we going to deal with copyright when everything has been fed into the mind of AI? When AI makes a mistake that threatens, or even takes, a human life, who is liable, and who is responsible for fixing it in the future? What will we do about criminal use of AI?

In general, new tools always have social and cultural consequences, and AI will be no different. I don’t think we’re anywhere near ready for them.

 

Conclusion:

              Advances in AI are already way ahead of our society’s ability to come to grips with the technology. Most people look at the changes brought by AI that are already here and choose to simply brace themselves and look for something to hold onto. We’re behaving as if AI is an unstoppable force, as inevitable as the seasons.

There are ways to manage AI, everything from particular types of consumer or governmental reporting, to international treaties, to a Luddite reaction of just pulling the plug on everything. Recently we’ve chosen no regulation of AI, going so far as to nullify state laws around AI. There will be consequences for that.

Because of a largely hands-off, laissez-faire approach to AI, we’re very unclear about our goals for AI; they are purposefully kept opaque. There are clearly amazing possibilities, but also the danger of creating monsters.