evgeny on ai take 2
this page/post is from dec 2024.. an earlier one, from july 2024, was evgeny on ai
via evgeny tweet [https://x.com/evgenymorozov/status/1864580177679806869]:
The AI We Deserve – Critiques of artificial intelligence abound. Where’s the utopian vision for what it could be? – dec 2024 by Evgeny Morozov
With responses from Brian Eno, Audrey Tang, Terry Winograd, Bruce Schneier & Nathan Sanders, Sarah Myers West & Amba Kak, Wendy Liu, Edward Ongweso Jr., and Brian Merchant
For a technology that seemed to materialize out of thin air, generative AI has had a remarkable two-year rise. ..Generative AI is upending many an industry, and many people find it both shockingly powerful and shockingly helpful. In health care, AI systems now help doctors summarize patient records and suggest treatments, though they remain fallible and demand careful oversight. In creative fields, AI is producing everything from personalized marketing content to entire video game environments. Meanwhile, in education, AI-powered tools are simplifying dense academic texts and customizing learning materials to meet individual student needs.
In my own life, the new AI has reshaped the way I approach both everyday and professional tasks, but nowhere is the shift more striking than in language learning.
In over two decades of language study, I’ve never used a tool this powerful. It not only boosts my productivity but redefines efficiency itself—the core promises of generative AI. The scale and speed really are impressive. How else could I get sixty personalized stories, accompanied by hours of audio across six languages, delivered in just fifteen minutes—all while casually browsing the web? And the kicker? The whole app, which sits quietly on my laptop, took me less than a single afternoon to build, since ChatGPT coded it for me. Vergesslichkeit, au revoir!
wow.. says it right there.. all we’re looking for is efficiency?.. yet bolded quote:
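(the app morozov describes above — 60 stories across 6 languages in minutes — can be sketched in a few lines.. a minimal illustration only, with a hypothetical `generate_story` stub standing in for the real LLM and text-to-speech calls; the 10-per-language split and all names here are assumptions, not from the essay)

```python
# Sketch of the kind of app Morozov describes: fan one learner request out
# into many personalized stories across several languages. A real version
# would call an LLM API and a text-to-speech service; here `generate_story`
# is a hypothetical stub that just returns the prompt it would send.

LANGUAGES = ["German", "French", "Spanish", "Italian", "Portuguese", "Dutch"]
STORIES_PER_LANGUAGE = 10  # 6 languages x 10 stories = the 60 in the essay


def generate_story(language: str, topic: str, index: int) -> str:
    """Stub for an LLM call; a real app would send a prompt like this one."""
    prompt = (
        f"Write short story #{index} in {language} about {topic}, "
        f"using simple vocabulary for an intermediate learner."
    )
    return prompt  # a real implementation would return the model's completion


def build_story_batch(topic: str) -> dict[str, list[str]]:
    """Return STORIES_PER_LANGUAGE stories for each language."""
    return {
        lang: [
            generate_story(lang, topic, i)
            for i in range(1, STORIES_PER_LANGUAGE + 1)
        ]
        for lang in LANGUAGES
    }


if __name__ == "__main__":
    batch = build_story_batch("ordering coffee")
    total = sum(len(stories) for stories in batch.values())
    print(f"{total} stories across {len(batch)} languages")
```
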
Why should the world-historical promise of computing be confined to replicating bureaucratic rationality?..t
there’s a legit use of tech (nonjudgmental exponential labeling) to facil the seeming chaos of a global detox leap/dance.. the unconditional part of left-to-own-devices ness.. for (blank)’s sake.. and we’re missing it
But generative AI raises the stakes, reigniting debates about the broader relationship between technology and democracy.
we need much deeper/broader.. ie: relationship between tech and augmenting interconnectedness
any form of democratic admin as cancerous distraction
To close this gap, I want to offer a different way of thinking about AI and democracy. Instead of aligning with either the realists or the refuseniks, I propose a radically utopian question: If we could turn back the clock and shield computer scientists from the corrosive influence of the Cold War, what kind of more democratic, public-spirited, and less militaristic technological agenda might have emerged? That alternative vision—whether we call it “artificial intelligence” or something else—supplies a meaningful horizon against which to measure the promises and dangers of today’s developments.
ie: tech as it could be; ai as augmenting interconnectedness
In other words, they recognized that their models were just that: models of actually existing intelligence. The discipline of AI, by contrast, turned metaphor into reality. Its pioneers, largely mathematicians and logicians, had no grounding in biology or neuroscience. Instead, intelligence became defined by whatever could be replicated on a digital computer—and this has invariably meant pursuing a goal or solving a problem, even in the biologically inspired case of neural networks.
intellectness as cancerous distraction we can’t seem to let go of.. there’s a legit use of tech (nonjudgmental expo labeling).. to facil a legit global detox leap.. for (blank)’s sake.. and we’re missing it
This fixation on goal-driven problem solving ironically went uncriticized by some of AI’s earliest and most prominent philosophical critics—particularly Hubert Dreyfus, a Berkeley professor of philosophy and author of the influential book What Computers Can’t Do (1972). Drawing on Martin Heidegger’s reflections on hammering a nail in Being and Time, Dreyfus emphasized the difficulty of codifying the tacit knowledge embedded in human traditions and culture. Even the most routine tasks are deeply shaped by cultural context, Dreyfus contended; we do not follow fixed rules that can be formalized as explicit, universal guidelines.
what computers can’t do.. hubert dreyfus.. martin heidegger..
It took nearly a decade for Dreyfus’s Heideggerian critique to resonate within the AI community, but when it did, it led to *significant realignments.. In the 1980s Winograd made a decisive turn away from replicating human intelligence. Instead, **he focused on understanding human behavior and context, aiming to design tools that would amplify human intelligence rather than mimic it.
*we don’t need realignments (aka: same song) if root of problem is ie: hari rat park law.. we need a way out.. something we’ve not yet seen/tried.. ie: the unconditional part of left to own devices ness
**again.. hari rat park law… nothing makes any real diff until we get out of sea world
Grounded in principles of human-computer interaction and interaction design, this approach set a new intellectual agenda: Rather than striving to replicate human intelligence in machines, why not use machines to enhance human intelligence, allowing people to achieve their goals more efficiently?..t As faith in the grand promises of conventional AI began to wane, Winograd’s vision gained traction, drawing attention from future tech titans like Larry Page, Reid Hoffman, and Peter Thiel, who attended his classes.
oi.. again.. intellectness as cancerous distraction.. need tech as nonjudgmental expo labeling to connect us
larry page.. reid hoffman.. peter thiel
It helped streamline communication, but in ways that often aligned with managerial objectives, consolidating power rather than distributing it.
The deeper issue lay in the very notion of social coordination that Winograd and Flores were trying to facilitate.
coord we need: imagine if we listened to the itch-in-8b-souls 1st thing everyday & used that data to connect us (tech as it could be.. ai as augmenting interconnectedness)
Winograd, to his credit, proved far more self-reflexive than most in the AI community. In a talk in 1987, he observed striking parallels between symbolic AI—then dominated by rules-based programs that sought to replicate the judgment of professionals like doctors and lawyers—and Weberian bureaucracy. “The techniques of artificial intelligence,” he noted, “are to the mind what bureaucracy is to human social interaction.” Both thrive in environments stripped of ambiguity, emotion, and context—the very qualities often cast as opposites of the bureaucratic mindset.
As historian of technology Jonnie Penn points out, Herbert A. Simon is a prime example: after aiming to build a “science of public administration” in the 1940s, by the mid-1950s he had become a key player in building a “science of intelligence.” Both endeavors, despite acknowledging the limits of rationality, ultimately celebrated the same value: efficiency in achieving one’s goals. In short, their project was aimed at perfecting the ideal of instrumental reason.
rather.. in short – perpetuating the whac-a-mole-ing ness of sea world
Why should the world-historical promise of computing be confined to replicating bureaucratic rationality?
there it is.. let’s stick with that.. aka: everything we’ve done/seen/tried to date
Why should anyone outside these institutions accept such a narrow vision of the role that a promising new technology—the digital computer—could play in human life? Is this truly the limit of what these machines can offer? Shouldn’t science have been directed toward exploring how computers could serve citizens, civil society, and the public sphere writ large—not just by automating processes, but by simulating possibilities, by modeling alternate futures? And who, if anyone, was speaking up for these broader interests?
oooof.. that’s just more ie of replicating bureaucratic rationality
In fairness, it’s unsurprising they didn’t ask these questions. The Efficiency Lobby knew exactly what it wanted: streamlined operations, increased productivity, and tighter hierarchical control. The emerging AI paradigm promised all of that and more. Meanwhile, there was no organized opposition from citizens or social movements—no Humanity Lobby, so to speak—advocating for an alternative. Had there been one, what might this path have looked like?
there’s a legit use of tech (nonjudgmental expo labeling).. to facil the seeming chaos of a global detox leap.. for (blank)’s sake.. and we’re missing it
legit freedom will only happen if it’s all of us.. and in order to be all of us.. has to be sans any form of measuring, accounting, people telling other people what to do
(hans otto) Storm was a disciple and friend of the firebrand heterodox economist Thorstein Veblen. While Veblen is widely known for celebrating “workmanship” as the engineer’s antidote to capitalist excess, his thinking took a fascinating, even playful turn when he encountered the scientific world. There, probably influenced by his connections to the pragmatists, Veblen discovered a different force at work: what he called “idle curiosity,” a kind of purposeless purpose that drove scientific discovery..t This tension between directed and undirected thought would become crucial to Storm’s own theoretical innovations.
how we gather in a space is huge.. need to try spaces of permission where people have nothing to prove to facil curiosity over decision making.. because the finite set of choices of decision making is unmooring us.. keeping us from us..
ie: imagine if we listened to the itch-in-8b-souls 1st thing everyday & used that data to connect us (tech as it could be.. ai as augmenting interconnectedness)
The contrast with the design mode of instrumental reason could not be more pronounced. Eolithism posits no predefined problems to solve, no fixed goals to pursue..t Storm’s Stone Age flâneur stands in stark opposition to the kind of rationality on display in Cold War–era thought experiments like the prisoner’s dilemma—and is all the better for it. The absence of predetermined goals broadens the flâneur’s capacity to see the world more richly, as the multiplicity of potential ends expands what counts as a means to achieve them.
aka: the unconditional part of left to own devices ness
‘in undisturbed ecosystem.. the avg individual.. left to its own devices.. behaves in ways that serve/stabilize the whole’ –dana meadows
we keep disturbing the ecosystem because we can’t seem to let go enough to see/try the unconditional part of left to own devices ness
This is Veblen’s idle curiosity at work. Separated from it, *design principles are fundamentally limited because they require fixed, predetermined goals and must eliminate diversity from both methods and materials, reducing their inherent value to merely serving those predetermined ends. Storm goes on to argue that efforts to apply design to **solve problems at scale, using the uniform methods of mass production, leave people yearning for vernacular, heterogeneous solutions that only eolithism can offer. Its spirit persists into modernity, embodied in unexpected figures—Storm identifies the junkman as the quintessential eolithic character.
*aka: finite set of choices of decision making is unmooring us law
**too conditional if goal is ‘problem solving’
What sets Storm apart from other thinkers who have explored similar intellectual territory—like Claude Lévi-Strauss with his notion of “bricolage” and Jean Piaget with his observations of children and their toys—is his refusal to treat the eolithic mindset as archaic or merely a phase for primitive societies or toddlers. This longing for the heterogeneous over the rigid is not something people or societies are expected to outgrow as they develop. Instead, it’s *a fundamental part of human experience that endures even in modernity. In fact, this striving might inform the very spirit—playful, idiosyncratic, vernacular, beyond the rigid plans and one-size-fits-all solutions—that some associate with postmodernity.
ie: graeber unpredictability/surprise law et al.. via idiosyncratic jargon ness of self-talk as data.. collected/heard/facil’d by tech as nonjudgmental expo labeling
Indeed, Storm argued that much of professional education carried an inherent anti-eolithic bias, lamenting that “good, immature eolithic craftsmen” were “urged to study engineering, only to find out, late and perhaps too late, that the ingenuity and fine economy which once captivated [them] are something which has to be *unlearned.” Yet, even in science and engineering, effective learning—especially in its early stages—succeeds by avoiding the algorithmic rigidities of the design mode. More often, it starts with what David Hawkins, a philosopher of education and one-time collaborator with Simon, called **“messing about.”
*ie: need global detox leap
**in the city.. as the day.. not so much about messing about ness.. because there is never nothing going on et al.. but about the unconditional part of left to own devices ness
In Storm’s terms, purposive action might itself emerge as the result of a series of eolithic impulses.
and again.. already too conditional
What does any of this have to do with a utopian vision for AI? If we define intelligence purely as problem solving and goal achievement, perhaps not much. In Storm’s prehistoric idyll, there are no errands to be run, no great projects to be accomplished. His Stone Age wanderer, for all we know, might well be experiencing deep boredom—“thinking preferably about nothing at all,” as Storm suggests.
But can we really dismiss the moment when the flâneur suddenly notices the eolith—whether envisioning a use for it or simply finding it beautiful—as irrelevant to how we think about intelligence? If we do, what are we to make of the activities that we have long regarded as hallmarks of human reason: imagination, curiosity, originality? These may be of little interest to the Efficiency Lobby, but should they be dismissed by those who care about education, the arts, or a healthy democratic culture capable of exploring and debating alternative futures?
rather.. simply.. if they care about legit freedom
Storm points to child’s play as a prime example of eolithism.
1 yr to be 5 again.. not yet scrambled ness et al
With this, we have arrived at a picture of human intelligence that runs far beyond instrumental reason. We might call it, in contrast, ecological reason—a view of intelligence that stresses both indeterminacy and the interactive relationship between ourselves and our environments. Our life projects are unique, and it is through these individual projects that the many potential uses of “eoliths” emerge for each of us.
let’s just let go of trying to call/define everything.. too conditional.. language as control/enclosure
Yet just because formalization is off the table doesn’t mean ecological reason can’t be technologized in other ways. Perhaps the right question echoes one posed by Winograd four decades ago: rather than asking if AI tools can embody ecological reason, we should ask whether they can enhance its exercise by humans..t
yeah.. that.. augmenting interconnectedness via nonjudgmental expo labeling
Here too, ChatGPT resembles the Coordinator, much like our own capitalist postmodernity still resembles the welfare-warfare modernity that came before it. While the Coordinator enhanced the exercise of instrumental reason by the Organization Man, ChatGPT lets today’s neoliberal subject—part consumer, part entrepreneur—glimpse and even flirt, however briefly, with ecological reason. The apparent increase in human freedom conceals a deeper unfreedom; behind both stands the Efficiency Lobby, still in control. This is why our emancipation through such powerful technologies feels so truncated.
What’s the alternative? Any meaningful progress in moving away from instrumental reason requires an agenda that breaks ties with the Efficiency Lobby. These breaks must occur at a level far beyond everyday, communal, or even urban existence, necessitating national and possibly regional shifts in focus. While this has never been done in the United States—with the potential exception of certain elements of the New Deal, such as support for artists via the Federal Art Project—history abroad does offer some clues as to how it could happen.
In the early 1970s, Salvador Allende’s Chile aimed to empower workers by making them not just the owners but also the managers of key industries. In a highly volatile political climate that eventually led to a coup, Allende’s government sought to harness its scarce information technology to facilitate this transition. The system—known as Project Cybersyn—was meant to promote both instrumental and ecological reason, coupling the execution of usual administrative tasks with deliberation on national, industry, and company-wide alternatives. Workers, now in managerial roles, would use visualization and statistical tools in the famous Operations Room to make informed decisions. The person who commissioned the project was none other than Fernando Flores, Allende’s minister and Winograd’s future collaborator.
Around the same time, a group of Argentinian scientists began their own efforts to use computers to spark discussions about potential national—and global—alternatives. The most prominent of these initiatives came from the Bariloche Foundation, which contested many of the geopolitical assumptions found in reports like 1972’s The Limits to Growth—particularly the notion that the underdeveloped Global South must make sacrifices to “save” the overdeveloped Global North.
Another pivotal figure in this intellectual milieu was Oscar Varsavsky, a talented scientist-turned-activist who championed what he called “normative planning.” Unlike the proponents of modernization theory, who wielded computers to project a singular, predetermined trajectory of economic and political progress, Varsavsky and his allies envisioned technology as a means to map diverse social trajectories—through a method they called “numerical experimentation”—to chart alternative styles of socioeconomic development. Among these, Varsavsky identified a spectrum including “hippie,” “authoritarian,” “company-centric,” “creative,” and “people-centric,” the latter two being his preferred models.
oi
Computer technology would thus empower citizens to explore the possibilities, consequences, and costs associated with each path, enabling them to select options that resonated with both their values and available resources. In this sense, information technology resembled the workshop of our eolithic flâneur: a space not for mere management or efficiency seeking, but for imagination, simulation, and experimentation.
each spinach or rock path.. aka: finite set of choices
The use of statistical software in modern participatory budgeting experiments—even if most of them are still limited to the local rather than national level—mirrors this same commitment: the goal is to use statistical tools to illuminate the consequences of different spending options and let citizens choose what they prefer. In both cases, the process is as much about improving what Paulo Freire called “problem posing”—allowing contesting definitions of problems to emerge by exposing them to public scrutiny and deliberation—as it is about problem solving.
still spinach or rock path.. aka: finite set of choices
The path to ecological reason is littered with failures to make this move. In the late 1960s, a group of tech eccentrics—many with ties to MIT—were inspired by Storm’s essay to create the privately funded Environmental Ecology Lab. Their goal was to explore how technology could enable action that wasn’t driven by problem solving or specific objectives. But as hippies, rebels, and antiwar activists, they had no interest in collaborating with the Efficiency Lobby, and they failed to take practical steps toward a political alternative.
ha
One young architecture professor connected to the lab’s founders, Nicholas Negroponte, didn’t share this aversion. Deeply influenced by their ideas, he went on to establish the MIT Media Lab—a space that celebrated playfulness through computers, despite its funding from corporate America and the Pentagon. In his 1970 book, The Architecture Machine: Toward a More Human Environment, Negroponte even cited Storm’s essay. But over time, this ethos of playfulness morphed into something more instrumental. Repackaged as “interactivity” or “smartness,” it became a selling point for the latest gadgets at the Consumer Electronics Show—far removed from the kind of craftsmanship and creativity Storm envisioned.
nicholas negroponte et al
Similarly, as early as the 1970s, Seymour Papert—Negroponte’s colleague at MIT and another AI pioneer—recognized that the obsession with efficiency and instrumental reason was detrimental to computer culture at large. Worse, it alienated many young learners, making them fear the embodiment of that very reason: the computer. ..culminating in the ill-fated initiative to provide “one laptop per child.” Stripped of politics, it’s very easy for eolithism to morph into solutionism.
seymour papert et al
The Latin American examples give the lie to the “there’s no alternative” ideology of technological development in the Global North. In the early 1970s, this ideology was grounded in modernization theory; today, it’s rooted in neoliberalism. The result, however, is the same: a prohibition on imagining alternative institutional homes for these technologies. There’s immense value in demonstrating—through real-world prototypes and institutional reforms—*that untethering these tools from their market-driven development model is not only possible but beneficial for democracy, humanity, and the planet.
*means sans any form of measuring, accounting, people telling other people what to do.. legit use of tech: nonjudgmental expo labeling
In practice, this would mean redirecting the eolithic potential of generative AI toward public, solidarity-based, and socialized infrastructural alternatives. As proud as I am of my little language app, I know there must be thousands of similar half-baked programs built in the same experimental spirit. While many in tech have profited from fragmenting the problem-solving capacities of individual language learners, there’s no reason we can’t reassemble them and push for less individualistic, more collective solutions. And this applies to many other domains.
But to stop here—enumerating ways to make LLMs less conducive to neoliberalism—would be shortsighted. It would wrongly suggest that statistical prediction tools are the only way to promote ecological reason. Surely there are far more technologies for fostering human intelligence than have been dreamt of by our prevailing philosophy..t We should turn ecological reason into a full-fledged research paradigm, asking what technology can do for humans—once we stop seeing them as little more than fleshy thermostats or missiles.
what we want/need are techs to foster/augment interconnectedness
ie: imagine if we listened to the itch-in-8b-souls 1st thing everyday & used that data to connect us (tech as it could be.. ai as augmenting interconnectedness)
As for the original puzzle—AI and democracy—the solution is straightforward. “Democratic AI” requires actual democracy, along with respect for the dignity, creativity, and intelligence of citizens. It’s not just about making today’s models more transparent or lowering their costs, nor can it be resolved by policy tweaks or technological innovation. The real challenge lies in cultivating the right Weltanschauung—this app does wonders!—grounded in ecological reason. On this score, the ability of AI to run ideological interference for the prevailing order, whether bureaucracy in its early days or the market today, poses the greatest threat.
Incidentally, it’s the American pragmatists who got closest to describing the operations of ecological reason. Had the early AI community paid any attention to John Dewey and his work on “embodied intelligence,” many false leads might have been avoided. One can only wonder what kind of AI—and AI critique—we could have had if its critics had looked to him rather than to Heidegger. But perhaps it’s not too late to still pursue that alternative path.
yeah.. a legit alt.. not that (dewey et al) alt
______
______
______
- ai (up to date list on this page)
- ai as augmenting interconnectedness
- ai as nonjudgmental expo labeling
- alison on ai
- anne on ai
- ben on agi ness
- ben and joscha on conscious ai
- benjamin on ai
- cory on ai
- deb on ai
- ef on appropriate tech
- evgeny on ai
- evgeny on ai take 2
- george on ai
- jaron on ai
- jon on ai
- kevin (o) on ai
- monika on ai
- nicolas on ai
- noam on ai
- paul on ai
- peter on ai
- sam on ai
- tristan and aza on ai
_____
_____
_____
____


