Some Thoughts on AI and the Technological Imperative
The machines are here. We already have surgery without surgeons, engineering without engineers, accounting without accountants, and journalism without journalists. Does it have to be this way?
Today’s post was sparked by something I recently read in Brian Merchant’s excellent Blood In The Machine (BITM) Substack: “What’s really going on with AI and jobs?”
The essay is well-written, filled with interesting data, and revealing. I strongly suggest you go read it. Two points jumped out at me that I’d like to highlight. The first is the concept of “AI-washing.” Just as “green-washing” describes corporations pretending to be environmentally conscientious through minimal, performative actions, “AI-washing” is a similar form of misdirection. Here’s Merchant’s definition: it’s “when ‘AI’ is deployed not so much as a technology functionally capable of replacing human labor... but as a logic and an ideological justification for management’s ulterior goals” [like firing people]. Executives firing thousands of employees right before Thanksgiving and Christmas will feel themselves somewhat absolved of pure heartlessness if they apologetically attribute the firings to the inevitable efficiencies of AI.
Spoiler: Merchant’s data shows that the job market is terrible right now, that AI replacing humans is only partially responsible, and that a great deal of “AI-washing” is occurring, too. Although he doesn’t emphasize them, structural factors in the larger economy are also playing a role.
AI’s impact on the job market for professionals has long fascinated me. Here’s a tweet from 2018 that encapsulates how I’ve been thinking about this for a while:
Why are we moving so inexorably towards this mechanized and dehumanized world? Multiple interacting and dynamic reasons exist, but I’d like to focus on one strand: Technological Determinism and the Technological Imperative.
To put it simply and colloquially, technological determinism is the idea that any new technology determines its social integration (and not the other way around). Marshall McLuhan, for example, was a technological determinist because he argued that when you introduce radio broadcasting into any society, this “tribal drum” [as a “hot medium”] will do essentially the same thing whether in Germany, Japan, the U.S.A., Russia, or China. Radio fuses “the psyche and society into a single echo chamber” (a line I love so much, I used it as a title for an article I wrote) regardless of where on earth it’s introduced.
The technological imperative is related to technological determinism. It argues that any new technology commands its own employment. The scholar David J. Rothman, in Beginnings Count: The Technological Imperative in American Healthcare (1997), explained the technological imperative in reference to kidney dialysis. You can read an abstract here, but my simplified version is that once the technology for kidney dialysis was perfected and widely reproducible, it became impossible to tell people experiencing renal failure that the machine was too expensive for them to use - and that they’d have to die (“let nature take its course”). Like the iron lung, kidney dialysis machines miraculously guaranteed life, and therefore had to be employed. No morally or ethically defensible alternative to their use existed. It’s a famous moment in the history of medicine because the U.S. government (reluctantly) agreed that the technology must be used expansively to save people’s lives. Here’s how Rothman phrases it:
In effect to satisfy what was perceived to be middle-class needs, the federal government intervened. In 1972, Congress agreed to underwrite all the costs of treating end-stage kidney disease and to this day kidney failure is the only ailment with its own guaranteed funding stream.
If you build a dialysis machine that will save millions of lives, you cannot deny its usage for frivolous reasons, or charge outrageous sums for access. You must employ the technology because the technology itself commands its use - a circular logic that makes sense (in some cases).
I’ve been thinking about (and writing about) these ideas for a long time.
Since I’m using this Substack to occasionally circulate some of my (unpublished) work, here’s the first page of a conference paper proposal I wrote up about a decade ago that illustrates how I’ve been thinking about technological determinism in journalism for a while.
“Journalism Without Journalists: The Technological Imperative in Journalism History”
On a clear afternoon in the summer of 1990, WTVJ news reporter Bonnie Anderson was assigned to report live about an impending storm. There was one problem: the skies were clear and the storm had tracked away from Miami. Anderson’s news director, however, insisted that she open the evening newscast with a live transmission from the station’s new satellite truck. Her report lasted “under fifteen seconds.” “I looked stupid, the station looked stupid, and the viewers were not served,” she recalled. Anderson later complained that her experience symbolized a troubling trend in American journalism: new telecommunication technology was overriding responsible editorial decision-making. Her claim, however, is not new; it is echoed in memoirs throughout the history of American journalism. The theory that technology, by its very existence, compels utilization is called the technological imperative. This paper explores the history of the technological imperative in American broadcast journalism, and argues that the technological imperative is deeply intertwined with the existential challenges facing contemporary American newsworkers. The ultimate result of the technological imperative is a world where journalism is produced without journalists – a dystopian future that draws ever closer to the contemporary media environment.
The paper was rejected for the conference - c’est la vie. I guess circa 2014, the idea that journalists might soon be completely replaced by new technologies, and that they were facing existential challenges, would’ve seemed overly alarmist? Or maybe historicizing a future problem was a bad fit for the conference theme? Who knows. I’ve long since given up on trying to decode the mystifying aspects of academia and peer review (but I think I’d enjoy writing up this paper for a conference or a journal if anybody would like to read it - H.L. Mencken famously despised how the telephone warped quality reporting early in his career*).
You can see where I’m headed with this… so I’m going to stop here. Perhaps I’ll revisit this theme in the future here in The Lint Trap. But I think it’s becoming clear that there’s a strong push right now, by some of the largest corporations on earth, to persuade the public, and governments, of the inevitability of Artificial Intelligence’s incorporation into all human endeavors. We’re being force-fed the conclusion that AI’s employment is a technological imperative. One example: the highly publicized finding that AI can detect early-stage cancers far better than doctors - and that, therefore, rejecting its usage is a moral transgression that will kill innocent people. What’s (intentionally) omitted from these discussions is the verifiable fact that the same technology has been proven to facilitate suicide, too. As of right now, it can help keep us alive and help kill us. That’s the paradox, and that’s why it’s not imperative that we all rush to use it in its current form.
This post is already too long (apologies). The bottom line is that once you understand technological determinism and the technological imperative, you need to question whether the positivists have always been proven correct. Did launching geosynchronous telecommunication satellites into space improve American broadcast journalism? I’d venture a 1964 edition of “The Huntley-Brinkley Report” was far more intelligent, informative, and better-written than a 2025 edition of the NBC Nightly News. But the 1960s show lacked the electrifying instantaneous live capability of today’s newscasts. Some integral and definitional aspects of TV news were improved by new technologies, but others were lost.
That’s a major overlooked problem with our headlong rush into AI. We’re not quite sure yet what we’ll be losing - aside from “jobs” - when the gains are tallied and celebrated in the future. What we lose will comprise “structured absences” in public memory. No matter what occurs, there’ll be much purposeful forgetting of the world before AI.
My guess is that by the time we figure out what’s been lost, it’ll be too late.
***
*After I finished this post, a fascinating 1981 Paris Review interview with Gabriel García Márquez crossed my TL, where Márquez explained another way new technology warped journalism:
INTERVIEWER
How do you feel about using the tape recorder?
GABRIEL GARCÍA MÁRQUEZ
The problem is that the moment you know the interview is being taped, your attitude changes. In my case I immediately take a defensive attitude. As a journalist, I feel that we still haven’t learned how to use a tape recorder to do an interview. The best way, I feel, is to have a long conversation without the journalist taking any notes. Then afterward he should reminisce about the conversation and write it down as an impression of what he felt, not necessarily using the exact words expressed. Another useful method is to take notes and then interpret them with a certain loyalty to the person interviewed. What ticks you off about the tape recording everything is that it is not loyal to the person who is being interviewed, because it even records and remembers when you make an ass of yourself. That’s why when there is a tape recorder, I am conscious that I’m being interviewed; when there isn’t a tape recorder, I talk in an unconscious and completely natural way.