let’s go with particulars.
Data (/ˈdeɪtə/ day-tə or /ˈdætə/ da-tə, also /ˈdɑːtə/ dah-tə) is a set of values of qualitative or quantitative variables; restated, data are individual pieces of information. Data in computing (or data processing) are represented in a structure that is often tabular (represented by rows and columns), a tree (a set of nodes with a parent-children relationship), or a graph (a set of connected nodes). Data are typically the results of measurements and can be visualised using graphs or images.
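The three representations named above (tabular, tree, graph) can be sketched with plain Python structures. A minimal, illustrative sketch — the names and values here are invented for the example, not taken from the source:

```python
# Tabular: rows and columns, here as a list of dicts (one dict per row).
table = [
    {"name": "alice", "score": 3},
    {"name": "bob", "score": 5},
]

# Tree: nodes with a parent-children relationship, as nested dicts.
tree = {
    "value": "root",
    "children": [
        {"value": "left", "children": []},
        {"value": "right", "children": []},
    ],
}

# Graph: a set of connected nodes, as an adjacency list
# mapping each node to the nodes it connects to.
graph = {
    "a": ["b", "c"],
    "b": ["a"],
    "c": ["a"],
}
```

The same underlying values can usually be moved between these shapes; which one fits depends on what relationships the data needs to express.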
Data as an abstract concept can be viewed as the lowest level of abstraction, from which information and then knowledge are derived.
Raw data, i.e., unprocessed data, refers to a collection of numbers and characters; it is a relative term, since data processing commonly occurs in stages, and the “processed data” from one stage may be considered the “raw data” of the next. Field data refers to raw data that is collected in an uncontrolled in situ environment. Experimental data refers to data that is generated within the context of a scientific investigation by observation and recording.
The word data is the traditional plural form of the now-archaic datum, neuter past participle of the Latin dare, “to give”, hence “something given”. In discussions of problems in geometry, mathematics, engineering, and so on, the terms givens and data are used interchangeably. This usage is the origin of data as a concept in computer science or data processing: data are accepted numbers, words, images, etc.
Data is also increasingly used in the humanities (particularly in the growing digital humanities), whose highly interpretive nature might oppose the ethos of data as “given”. Peter Checkland introduced the term capta (from the Latin capere, “to take”) to distinguish between an immense number of possible data and a subset of them, to which attention is oriented. Johanna Drucker has argued that the humanities affirm knowledge production as “situated, partial, and constitutive” and that using data may therefore introduce assumptions that are counterproductive, for example that phenomena are discrete or observer-independent. The term capta, which emphasizes the act of observation as constitutive, is offered as an alternative to data for visual representations in the humanities.
qualitative, raw, given.
let’s try something different. let’s quit obsessing with data we’ve figured out how to cheat/scam/control.
quality of data (or whatever) matters little if our focus is on the wrong kind of data (or whatever).
let’s use data that matters.. to rewire ourselves to each other.
let’s use self talk.. as our data.
if output matters, input matters
Data, Data, Everywhere, but Who Gets to Interpret It? | EPIC https://t.co/zLXjSk6C2S
Original Tweet: https://twitter.com/FiddleNP/status/595658964232200192
Through this work we realized that there were in fact plenty of people who were interested in getting beyond the canned, fixed representations of data provided by app makers, but were not necessarily interested in learning statistics or experimental science. As an anthropologist, I began to think about this disinterest as also an insistence on re-valuing the situatedness of situated knowledge. That is, it is also a recognition that not all problems can be reduced to matters of scientific or positivist enquiry.[..]visualization tools that surface matters of concern, not matters of fact.[..]I cannot help but wonder how much contextual knowledge is sanitized away by those more comfortable making guesses about what others intended.
The death of the theorist and the emergence of data and algorithms in digital social research. bit.ly/1f8HbPv
Computer code, software and algorithms have sunk deep into what Nigel Thrift has described as the “technological unconscious” of our contemporary “lifeworld,” and are fast becoming part of the everyday backdrop to Higher Education. Academic research across the natural, human and social sciences is increasingly mediated and augmented by computer coded technologies. This is perhaps most obvious in the natural sciences and in developments such as the vast human genome database. As Geoffrey Bowker has argued, such databases are increasingly viewed as a challenge to the idea of the scientific paper (with its theoretical framework, hypothesis and long-form argumentation) as the “end result” of science:
The ideal database should according to most practitioners be theory-neutral, but should serve as a common basis for a number of scientific disciplines to progress. … In this new and expanded process of scientific archiving, data must be reusable by scientists. It is not possible simply to enshrine one’s results in a paper; the scientist must lodge her data in a database that can be easily manipulated by other scientists.
The apparently theory-neutral techniques of sorting, ordering, classification and calculation associated with computer databases have become a key part of the infrastructures underpinning contemporary big science. The coding and databasing of the world does not, though, end with big science. It is becoming a major preoccupation in the social sciences and humanities too.
For some enthusiastic commentators, such as Chris Anderson of Wired magazine, big data and its associated algorithmic techniques are bringing about “the end of theory” and disciplinary expertise:
This is a world where massive amounts of data and applied mathematics replace every other tool that might be brought to bear. Out with every theory of human behavior, from linguistics to sociology. Forget taxonomy, ontology, and psychology. Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity. With enough data, the numbers speak for themselves.
Social science appears to be escaping the academy. Instead of social scientists, the new experts of the social media environment, argue Viktor Mayer-Schonberger and Kenneth Cukier, are the “algorithmists” and big data analysts of Google, Facebook, Amazon, and of software and data analysis firms. Algorithmists are experts in the areas of computer science, mathematics, and statistics, as well as aspects of policy, law, economics and social research, who can undertake big data analyses and evaluations.
Facebook, for example, has a Data Science Team that is responsible for “what Facebook knows” and can “apply math, programming skills, and social science to mine our data for insights that they hope will advance Facebook’s business and social science at large.” Run by Facebook’s “in-house sociologist,” the Data Science Team aims to mobilize Facebook’s massive data resources to “revolutionize” understanding of why people behave as they do.
Similar methods are being deployed in politics. The think tank Demos has recently established a Centre for the Analysis of Social Media (CASM), a research centre dedicated to “social media science” which seeks to “see society-in-motion” through big data, as its research director Carl Miller explains:
To cope with the new kinds of data that exist, we need to use new big data techniques that can cope with them: computer systems to marshal the deluges of data, algorithms to shape and mould the data as we want and ways of visualising the data to turn complexity into sense.
Underlying “social media science” is a belief that the behaviour of citizens can be analysed and understood as a kind of data to inform new policy ideas. The emergence of “policy labs” that work across the social scientific, technological and policy fields—such as the Public Services Innovations Lab at Nesta, New York’s Governance Lab, and Denmark’s MindLab—is further evidence of how social science expertise is diversifying. For think tanks and policy labs the political promise of using sophisticated algorithmic techniques to analyze and visualize big data is to make societies and populations visible with unprecedented fidelity in order to improve government intervention.
matters little (ability to handle complex data).. if its not really about us.. as alive people..
ie:not helping… but rather perpetuating toxicity..
Many researchers are optimistic about the synergies between digital and social research methods. Emerging methods deployed by digital social researchers such as Lev Manovich include the use of Twitter and blogs to document everyday activities, the mobilization of search engine analytics to reveal massive population trends and social behaviours over time, the analysis of Instagram images to detect cultural and social patterns, the study of social network formation on Facebook, and so on. These platforms enable the continuous generation of data about social life and make possible new forms of social data, analysis and visualization.
As David Beer reports, the kind of software that can crawl, mine, capture and scrape the web for data has the potential to be powerful in academic research. Social media aggregators, algorithmic database analytics and other forms of what might be termed “sociological software” have the capacity to see social patterns in huge quantities of data and to augment how we “see” and “know” ourselves and our societies. Sociological software offers us much greater empirical, analytical and argumentative potential.
Noortje Marres has suggested another way to think about the proliferation of new devices and formats for the documentation of social life. She argues that we need to acknowledge a “redistribution of social research” and to see social science as a “shared accomplishment” as the roles of social research are distributed between different actors. Such a redistribution of research would include human actors such as academic researchers, software developers, data analysts, commercial R&D labs, and bloggers and tweeters, but also a much broader set of actors such as databases, software, algorithms, platforms, and other digital devices, media and infrastructures that all contribute to the enactment of digital social research. The redistribution of research among these diverse actors, Marres argues, would entail a “remediation of methods” as social research is reshaped and refashioned through the use of devices and platforms.
The process of redistributing, remediating, or “reassembling social science methods” as Evelyn Ruppert, John Law and Mike Savage have articulated it, means recognizing that digital devices are both part of the material of social lives and part of the methodological apparatus required for knowing those lives too. As they elaborate,
digital devices are reworking and mediating not only social and other relations, but also the very assumptions of social science methods and how and what we know about those relations.
Likewise, in an article on “algorithms in the academy,” David Beer has shown that software algorithms are increasingly intervening in social research through the “algorithmic normalities” of SPSS, Google Scholar, LexisNexis, as well as emerging social media and data analytics devices, which frame information and codify the social world in certain ways—shaping the objects of analysis.
so… we’re getting more efficient at gathering blah blah ness.
not better at listening to us.
Digital devices are also reworking and mediating HE institutions and academic researchers’ professional identities and personal lives. Roger Burrows has argued that the work of researchers in universities is now subject to “metricization” from an assortment of measuring and calculating devices. These include bibliometrics, citation indices, workload models, transparent costing data, research and teaching quality assessments, and commercial university league tables, many increasingly enacted via code, software and algorithmic forms of power. As a result, Deborah Lupton suggests, an academic version of the “quantified self” is emerging: a professional identity based on quantified measures of output and impact.
Writing in an article titled “#MySubjectivation,” the philosophy researcher Gary Hall argues that today’s social media are constitutive of a particular emergent “epistemic environment.” The epistemic environment of “traditional” academic knowledge production was based on the Romantic view of single authorship and creative genius materialized in writing, long-form argumentation, and the publication of books. New social media infrastructures, however, are reshaping the epistemic environment of contemporary scholarly knowledge production.
In the emerging epistemic environment of HE, academics are increasingly encouraged to be self-entrepreneurial bloggers and tweeters, utilizing social media platforms and open access publishing environments to extend their networks, drive up citations, promote their professional profiles, and generate impact. Commercial social media platforms such as Twitter, Facebook, LinkedIn and Academia.edu are becoming part of the everyday networked infrastructure through which academics create, perform and circulate research, knowledge and theory. As Hall states it, the emerging epistemic environment:
invents us and our own knowledge work, philosophy and minds, as much as we invent it, by virtue of the way it modifies and homogenizes our thought and our behaviour through its media technologies.
To put it more bluntly, academics are becoming data, as mediated through complex coded infrastructures and devices. Geoffrey Bowker has written that “if you are not data, you don’t exist”; the same is true for academics in Higher Education. The unfolding effects of data and algorithms on HE ought to be the subject of serious social scientific inquiry.
no wonder it doesn’t matter to people if data is even us.
I like the awareness of the 1st para… but then goes on to expand without first waking people up.. aka: setting them free.. returning their turtle shell…
For researchers in Higher Education the task is to be open and alert to the current redistribution of research across these new infrastructures, devices, experts and organizations, and to recognize how our knowledge, theories and understandings of the social world we are studying are being mediated, augmented and even co-produced by software code and algorithmic power.
This is amazing. Subtle, powerful thinking about where you could apply pressure to logistics chains via the unions
@thejaymo @benvickers_ #stacktivism https://t.co/vC4Ge5Fgg4 this is really good stuff guys
Original Tweet: https://twitter.com/leashless/status/705897497407913985
big (only) thing i got out of article:
“Data is just a proxy for control.”
Céline (@krustelkram) tweeted at 3:11 AM on Tue, Jun 20, 2017:
Bodiless Brains or “How can we move from data to Dada?” https://t.co/YPolZ8kMAn
What’s collapsing right now is the imagination of a better life.
How can we move from data to *Dada and become a twenty-first-century avant-garde, one that truly understands the technological imperative and shows that “we are the social in social media”?
Dada (/ˈdɑːdɑː/) or Dadaism was an art movement of the European avant-garde in the early 20th century, with early centers in Zürich, Switzerland at the Cabaret Voltaire (circa 1916); New York Dada began circa 1915, and after 1920 Dada flourished in Paris. Developed in reaction to World War I, the Dada movement consisted of artists who rejected the logic, reason, and aestheticism of modern capitalist society, instead expressing nonsense, irrationality, and anti-bourgeois protest in their works. ..Dadaist artists expressed their discontent with violence, war, and nationalism, and maintained political affinities with the radical left
How do we develop, and then scale up, critical concepts and bring together politics and aesthetics in a way that speaks to the online millions?
let’s try this: short bp
Let’s identify the hurdles, knowing that it’s time to act. We know that making fun of the petty world of xenophobes isn’t working. What can we do other than coming together? Can we expect anything from the designer as lone wolf? How do we organize this type of political labor? Do we need even more tools that bring us together? Have you already used Meetup, Diaspora, DemocracyOS, and Loomio? Do we perhaps need a collective dating site for political activism? How can we design, and then mobilize, a collective networked desire that unites us in a “deep diversity”? Is the promise of open, distributed networks going to do the job, or are you looking for strong ties—with consequences?
mech simple enough et al
It’s time to reread Hannah Arendt’s The Origins of Totalitarianism (in which we find David Rousset’s famous quote:
“Normal men do not know that everything is possible”).
This means that the overall narrative will have to be robust (while “agile”).
Memes are designed to be jammed, yet the core message stays the same no matter how radically the meme is altered.
fractal thinking ness
We can also call this condensed semiotic unit a symbol, although the symbolic aspect of a meme often remains invisible.
We need to blast lasting holes in the self-evident infrastructure of the everyday.
In preparation for things to come, I asked a few people the perennial question: what is to be done?
We need to get to the heart of the matter, rather than attempting to deal with symptoms.
The challenge is creating bubble-breaking memes
Your content has to be so obscure and mysterious that it’s not working as a propaganda tool anymore.
My understanding is that memes are sort of a vessel or coordinating point for organization, but without themes they are largely lacking in ideological value. They are like a vocabulary, and need to be animated and organized by an imperative or narrative
According to Franco Berardi, *we need a new rhythm of elaboration; we need to **slow down sequentiality, heal from acceleration, and find a new tempo of movement. This cannot be realized through further acceleration. Real-time communication already ruins our bodies, our minds.
Michel Bauwens (@mbauwens) tweeted at 5:15 AM – 16 Aug 2018 :
RT @instigating: Right track! #Cities restoring #privacy & empowering #citizens with #data @nesta_uk https://t.co/BwWYb60bne (http://twitter.com/mbauwens/status/1030050319340851200?s=17)
how 8 cities are empowering citizens w data
Martin Tisné (@martintisne) tweeted at 5:32 AM – 14 Dec 2018 :
The creation and consumption of data reflects how power is distributed in society. My article in @techreview on why we need a Bill of Data Rights just out https://t.co/znbKtS6BkQ and thank you to @mer__edith @JeniT and @azeem for great discussions that led to this (http://twitter.com/martintisne/status/1073556253115322368?s=17)
“Data ownership” does not fix existing problems: it creates new ones
we need a framework that gives people rights about how their data is used without requiring them to take ownership of it themselves..t
Consent, to put it bluntly, does not work. Data rights should protect privacy, and should account for the fact that privacy is not a reactive right to shield oneself from society..t
It is about freedom to develop the self away from commerce and away from governmental control..t
data rights are fundamentally about securing a space for individual freedom and agency while participating in modern society.
Bills of Data Rights should include rights like these:
- The right of the people to be secure against unreasonable surveillance shall not be violated.
- No person shall have their behavior surreptitiously manipulated.
- No person shall be unfairly discriminated against based on data.
These are *by no means all the provisions a durable and effective Bill of Data Rights would need. They are meant to be a beginning, and examples of the sort of **clarity and generality such a document needs..t
A new paradigm for understanding what data is—and what rights pertain to it—is urgently needed if we are to forge an equitable 21st-century polity..t
rather.. a deeper understanding of what data matters.. ie: self-talk
without a strong and vigorous data-rights infrastructure, open democratic society cannot survive..t
rather.. w/o a deep enough data infra.. humanity can’t thrive..
begs a mech to listen to all the voices everyday.. not measure/validate things .. (as it could be..)
Who needs democracy when you have data? – MIT Technology Review https://t.co/FTqH5qbk9S
Original Tweet: https://twitter.com/mbauwens/status/1126424037079621632
article from oct 2018
For any authoritarian regime, “there is a basic problem for the center of figuring out what’s going on at lower levels and across society,” says Deborah Seligsohn, a political scientist and China expert at Villanova University in Philadelphia. How do you effectively govern a country that’s home to one in five people on the planet, with an increasingly complex economy and society, if you don’t allow public debate, civil activism, and electoral feedback? How do you gather enough information to actually make decisions?
perhaps we don’t focus on decisions.. rather on curiosity
And how does a government that doesn’t invite its citizens to participate still engender trust and bend public behavior without putting police on every doorstep?
exactly.. but so too if all the govt is asking for is to participate in the givens..
“No government has a more ambitious and far-reaching plan to harness the power of data to change the way it governs than the Chinese government,” says Martin Chorzempa of the Peterson Institute for International Economics in Washington, DC.
perhaps till now.. but we have the means to harness a diff/humane power now
relying on the wisdom of technology and data carries its own risks.
Data instead of dialogue
As Harvard historian Julian Gewirtz explains, “When the Chinese government saw that information technology was becoming a part of daily life, it realized it would have a powerful new tool for both gathering information and controlling culture, for making Chinese people more ‘modern’ and more ‘governable’—which have been perennial obsessions of the leadership.” Subsequent advances, including progress in AI and faster processors, have brought that vision closer.
(on using data to influence behavior and governance): The most far-reaching is the Social Credit System, though a better translation in English might be the “trust” or “reputation” system. The government plan, which covers both people and businesses, lists among its goals the “construction of sincerity in government affairs, commercial sincerity, and judicial credibility.”
What information is available is deeply flawed; systematic falsification of data on everything from GDP growth to hydropower use pervades Chinese government statistics.
deeper than that.. ie: data from whales in sea world