In a recent post I offered 6 policies for Democrats to run on in 2026 that I think would be good for them electorally, but also good for the country. I followed it up with 6 policies for Republicans; some of them might fit into Trump’s bailiwick, while others could set directions for a post-Trump GOP. What follows is where those policies could overlap.
The Throw the Bums Out Bill
Concerns about corruption, aging politicians, and the lack of churn in government are everywhere. This goes beyond party; I think everyone can agree that politicians shouldn’t be bribed, that they should be cognizant of their behaviors and choices, and that they should know when to quit. A solution would be to require federal politicians to put their investments in blind trusts, to follow a gifts regime similar to the one for federal employees, and to accept term limits. I like my 8, 10, 12, 14 scheme, but I’m sure there are other rational ratios out there.
Housing for America
I was interested to learn that Vivek Ramaswamy recently came out in favor of Ezra Klein's Abundance agenda. His only objection was that the description was too “Democrat coded.” (Imagine that: the two parties actually agree on one of the biggest issues facing our country; they just don’t like each other’s language. Where are our English majors?) That’s super interesting for the state of Ohio, where Ramaswamy intends to be the next governor. It is also interesting for establishing some sort of bipartisan consensus around housing policy.
What might each side getting half a loaf on housing policy look like? Serious public investment in building, coupled with incentives to regulate “more like Texas than California,” to borrow Klein’s overworn phrase. Maybe encourage some local buy-in with county architecture prizes named after an Ayn Rand character? (What up-and-coming architect wouldn’t want to put “Hunterdon County HRAI Awarded Architect” on their resume?)
Artificial Intelligence Regulation, Compensation, and Treaties
This is
one of those things where there needs to be some consensus. As a country, what
do we think AI ought to be for? Is it to eliminate all entry-level white-collar
jobs? Is it for intellectual property theft by proxy? Is it a coding tool? Is
it a union-busting device? Is it an educational tool? Is it a digital parent or
romantic partner? Is it a taxi driver? Is it a medical diagnostic tool? Is it an
electronic day trader? Is it a replacement for human relationships writ large?
Is it a digital slave? Is it a replacement for humans? Is it a replacement for
CEOs? Is it a steroid for economic growth? Is it a dead man’s switch for
nuclear weapons? Are we trying to create an electronic god? Is it Clippy? What
exactly are we planning to do with AI?
I understand Artificial
Intelligence can be a digital Swiss Army Knife, but if someone stabbed someone
else to death with such a knife, we’d still call it murder, even if they used the
corkscrew. So, what are the no-gos for this tool/person/set of code? We can regulate
how AI develops. In fact, we can push for the development of a global consensus
about what this tool is for. We can also mitigate the damage it causes in the
lives of those it displaces.
So, for example, if 20% of the
people younger than me will be unable to have a job on account of AI… we should
maybe have a plan for that.
If a good number of people directly involved with AI are raising alarms about AI developing interests that diverge from humanity’s, ways of communicating beyond human understanding, and
means of “escaping” their current digital habitats… perhaps a bit of caution is
in order.
If AI is sucking up water and power
resources to such an extent that it is noticeable on everyone’s electric bills
and there is talk of AI droughts… maybe we should set no-go boundaries for resource use.
One of my favorite tools is a concrete
framing of Aristotelian Ethics: Glasses, Hammer, and Map. Where are we? What
tools do we have? Where are we going? As a society and as a planet we need to answer
those types of questions about AI. There should be a bipartisan consensus to ask
those questions broadly and act on our eventual answers. My assumption is that
after a robust conversation about AI we would end up regulating AI nationally, compensating and retraining folks who are especially adversely affected by it, and pushing for global treaties around AI.
Other Points of Convergence
Thinking aloud about other potential places where partisanship could take a back seat, three areas of convergence come to mind. It seems to be in the GOP’s best interest to stabilize the Affordable Care Act; otherwise they’ll take the blame for the Big Beautiful Bill’s contribution to the problem. Who knows, once they start working on it they might come up with a long-term fix. There may still be a bipartisan consensus around arming Ukraine and preparing weapon systems for a direct confrontation with China. There may still be room for an immigration deal, which could be an off-ramp from the current horrific deportation regime.
Points of Divergence
I imagine my Cap & Trade suggestion is not where the GOP is at; climate change denialism is still prevalent in the Republican Party. Strengthening public colleges, even in a federalized way, and empowering the Consumer Financial Protection Bureau also seem out of reach.
Conclusion
With a little creativity the US Government could reform our federal political system, address housing shortages, and shape the future of AI instead of being shaped by it. If we were really brave and thoughtful, we could also make healthcare affordable, provide for the defense of our allies and the deterrence of our competitors, and make our immigration system logical for the 2020s and beyond.