alison on ai

via tweet from alison gopnik

Some recent AI thinking. Human caregiving as a solution to the AI alignment problem. We’ve always had to create autonomous intelligent agents with goals that mesh with ours but may also be new and different – they’re called children.

notes/quotes from her article (jan 2023):

The obvious idea is to train the computer to recognize and understand human goals, and to make sure that they help humans to accomplish those goals. .. t

imagine if we listened to the itch-in-8b-souls 1st thing everyday & used that data to connect us (tech as it could be.. ai as augmenting interconnectedness)

But as the social media example shows, we humans are often not very good at recognizing our own goals, and those goals are often contradictory.. So how could a computer figure out what we really want when we don’t know ourselves? ..

need 1st/most: means to undo our hierarchical listening to self/others/nature

ie: tech as it could be

A solution to these problems may come from an unexpected source. We humans already face the alignment problem, and we always have. We have always had to figure out how to create autonomous, intelligent beings who share our values and goals but can also change and even reject those values and goals. They are our children.   

but.. maté parenting law et al.. so not legit solution.. need global detox/re\set first

The human answer to this problem comes through an undervalued and overlooked kind of intelligence—*the intelligence of care. Caregivers somehow accomplish the task of producing new, intelligent, autonomous creatures. **They pass on the discoveries, goals, and values of previous generations. Yet they also provide children with a protected, nurturing environment that allows them to experiment and explore and to ***invent new goals and values to suit new circumstances. Developmental psychologists have demonstrated that both children and caregivers have sophisticated cognitive abilities that underpin this kind of cultural evolution—like *****“theory of mind” and “intuitive pedagogy.” These abilities have allowed human agents to change their “objective functions” over generations. They also have ensured that, by and large, those functions serve the interest of the whole human community (at least, so far).  

*oi.. steiner care to oppression law et al.. via *passing on non legit ness.. gotta get out of sea world first.. hari rat park law et al

***this isn’t legit space for inventing anything new.. if still in sea world.. all will be same song

*****oi oi oi .. so not true

Teachers and therapists must also figure out how to help students and patients formulate their own goals, while maintaining a difficult balance between guidance and autonomy. 

oi.. help? students and patients?.. why not just people..

hari rat park law..

sans any form of m\a\p

The usual evolutionary and psychological accounts of morality, altruism, and cooperation, as well as most political and economic theories, depend on the idea of the social contract. In complex situations, we can get better outcomes for everybody if people trade off their own interests and those of other individual autonomous agents.

huge red flags.. bauwens contracts law.. marsh exchange law.. graeber exchange law.. et al

But this contractual model doesn’t apply naturally to care. *Care doesn’t require even implicit negotiation or reciprocity.  Indeed, it is often profoundly asymmetric—think of a father caring for his helpless infant, or a **teacher caring for a struggling student. Instead of trading off their own interests and those of another, the carer extends their own interests to include those of the other. Moreover, expanding values and interests in this way is a challenging cognitive task

oh my.. thinking sinclair perpetuation law dear alison

*yeah that

**not that.. not any form of m\a\p

Caregiving, *and the intelligence that goes with it, has always gotten much less intellectual and academic attention than it deserves. ..But the **morality of being a parent is about taking a creature who isn’t autonomous and can’t make their own decisions and turning them into one who can. 

*oi.. intellect ness et al

**oi.. need curiosity over decision making.. otherwise perpetuating structural violence

Paying more attention to the intelligence of care is important for lots of reasons. ..humans, like all parents, must figure out when to dictate and when to let go, and how to negotiate the delicate balance of guiding the AIs’ decisions and allowing them to decide for themselves.  

oi oi oi.. part of the cancer.. not part of the solution.. oi

This is science fiction, of course, but if genuinely intelligent and autonomous artificial agents ever do emerge, then we will have to figure out how to go beyond exploiting them for our own ends and getting them to accomplish our own goals. We will have to care for them and help them learn to create their own goals. Even now, we might help solve the alignment problem in AI by thinking about how we solve it in human relationships.. t

need to org around legit needs.. ai is just for augmenting (or hidden) interconnectedness.. via tech w/o judgment et al

mufleh humanity law: we have seen advances in every aspect of our lives except our humanity – Luma Mufleh