intro’d to Dan here (via Maha):
ℳąhą Bąℓi, PhD مها بالي (@Bali_Maha) tweeted at 5:43 AM – 9 Jun 2019 :
This is among the *best* critique of AI/analytics I have ever read @14prinsp @KateMfD @Czernie @gsiemens https://t.co/K8tWYtK8Ug (http://twitter.com/Bali_Maha/status/1137686601662959621?s=17)
article by @danmcquillan
Machine learning extends bureaucracy into the future;
or rather, it bureaucratises a probabilistic future and actualises it in the present.
too much ness
A human-in-the-loop is not a humanistic pushback
as that human is themselves subsumed by the institution-in-the-loop.
They (people’s councils) are a collective questioning
of the decisions that define the way the machines will make decisions,
by applying critical pedagogy and situated knowledge.
They constitute a different subjectivity –
iterative deliberation of consensus, done right,
is an antidote to bureaucracy and to the calculative iterations of machine learning
public consensus always oppresses someone(s)..
We need to develop a different order of ordering.
Instead of ways of organising that allow everyone to evade responsibility,
we need to reclaim our own agency through self-organisation..t
We need to think collectively about ways out of this mess,
learning from and with each other rather than relying on machine learning.
countering thoughtlessness with practices of collective care.
We can’t uninvent either AI or bureaucracy,
but we can choose to radically change both our modes of organisation
and our approach to computational learning.
3 min interview from 2018 [https://vimeo.com/262354951]
mainly because they are such an efficient mech for classification and targeting.. a mech in a mechanical sense.. algo’s are active at simultaneously classifying and acting upon that classification..t
is there such a thing as machine learning for the people..t
Dan McQuillan | Losing Your Voice | Meaning 2018 – 15 min – [https://www.youtube.com/watch?v=KtfkCIfgBaw]
1 min – we should ask why we want machines to listen out for signs of distress;
why go to all this trouble when we could do the listening ourselves
can we..? i don’t think we can .. not to every voice.. at least not till we get back in the sync of an undisturbed ecosystem.. so begs a mech to do that.. ie: tech as it could be.. with 2 convers as infra
2 min – machine listening offers the prospect of early intervention.. beyond anything psychiatry could have previously imagined
machine learning’s pattern finding.. means it can be used for prediction.. as thomas insel says.. digital smoke alarms for people w mental illness
alive people (whales back out of sea world) aren’t predict\able
3 min – ie’s spot depression
that’s too late – today we can listen earlier/deeper
5 min – it’s mathematically impossible to produce all around fairness
there are many diff mathematical ways to define fairness and you can’t satisfy them all at the same time
fairness.. can’t be defined.. always changing et al.. but we can satisfy equity at same time.. in fact.. it won’t work unless it is all at same time
6 min – .. w the net effect of automating ineq
9 min – we need to know how to defend against a therapeutic Stasi
no defended ness needed.. if ie: gershenfeld something else law
11 min – seeking to be heard over the stentorian tones of the psychiatric establishment
13 min – what we need is a society where precarity, insecurity and austerity don’t fuel generalised distress..t
14 min – we should ask instead how our new forms of calculative cleverness
can be stitched into an empathic technics that breaks with machine learning as a mode of targeting..and wreathes computation with ways of caring..t
ℳąhą Bąℓi, PhD مها بالي (@Bali_Maha) tweeted at 10:02 AM on Sun, Jun 09, 2019:
“But we should ask why we want machines to listen out for signs of distress;
Why go to all this trouble when we could do the listening ourselves?” @danmcquillan via @openDemocracy
Cc @KateMfD @14prinsp @Czernie @MiaZamoraPhD @catherinecronin
this article is a transcript (plus some extra words – and many links) of the 15 min video above – on losing your voice
After my PhD in Experimental Particle Physics I worked with people with learning disabilities and as a mental health advocate, and founded Multikulti, a community-led multilingual website for asylum seekers & refugees. I attended the G8 protest in Genoa in 2001 and was one of 93 people who were beaten, disappeared & tortured by the police. While working at Amnesty International I created the Digital Directorate and led their delegation to the first UN Internet Governance Forum. I co-founded Social Innovation Camp, which brought together ideas, people and digital tools to prototype solutions to social problems, and ran camps in different countries including Georgia, Armenia & Kyrgyzstan. More recently I co-founded Science for Change Kosovo, a youth-led air quality citizen science project based on critical pedagogy. I’m currently a Lecturer in Creative & Social Computing.