data defn


let’s go with particulars.

wikipedia small

Data (/ˈdeɪtə/ day-tə or /ˈdætə/ da-tə, also /ˈdɑːtə/ dah-tə) is a set of values of qualitative or quantitative variables; restated, data are individual pieces of information. Data in computing (or data processing) are represented in a structure that is often tabular (represented by rows and columns), a tree (a set of nodes with a parent-child relationship), or a graph (a set of connected nodes). Data are typically the results of measurements and can be visualised using graphs or images.

Data as an abstract concept can be viewed as the lowest level of abstraction, from which information and then knowledge are derived.

Raw data, i.e., unprocessed data, refers to a collection of numbers and characters, and is a relative term; data processing commonly occurs by stages, and the “processed data” from one stage may be considered the “raw data” of the next. Field data refers to raw data that is collected in an uncontrolled in situ environment. Experimental data refers to data that is generated within the context of a scientific investigation by observation and recording.

The word data is the traditional plural form of the now-archaic datum, neuter past participle of the Latin dare, “to give”, hence “something given”. In discussions of problems in geometry, mathematics, engineering, and so on, the terms givens and data are used interchangeably. This usage is the origin of data as a concept in computer science or data processing: data are accepted numbers, words, images, etc.

Data is also increasingly used in the humanities (particularly in the growing digital humanities), whose highly interpretive nature might oppose the ethos of data as “given”. Peter Checkland introduced the term capta (from the Latin capere, “to take”) to distinguish between an immense number of possible data and a sub-set of them, to which attention is oriented. Johanna Drucker has argued that the humanities affirm knowledge production as “situated, partial, and constitutive” and that using data may therefore introduce assumptions that are counterproductive, for example that phenomena are discrete or observer-independent. The term capta, which emphasizes the act of observation as constitutive, is offered as an alternative to data for visual representations in the humanities.
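the three shapes the wikipedia excerpt names (tabular, tree, graph) can be sketched in a few lines of python — the values here are made up, just to show one small dataset in each form:

```python
# the same tiny dataset represented three ways (hypothetical values)

# tabular: rows and columns, as a list of dicts
table = [
    {"node": "a", "parent": None},
    {"node": "b", "parent": "a"},
    {"node": "c", "parent": "a"},
]

# tree: a set of nodes with a parent-child relationship, as nested dicts
tree = {
    "node": "a",
    "children": [
        {"node": "b", "children": []},
        {"node": "c", "children": []},
    ],
}

# graph: a set of connected nodes, as an adjacency list
graph = {"a": ["b", "c"], "b": ["a"], "c": ["a"]}

# all three describe the same set of nodes
assert sorted(row["node"] for row in table) == sorted(graph)
```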

qualitative, raw, given.

let’s try something different. let’s quit obsessing with data we’ve figured out how to cheat/scam/control.

quality of data (or whatever) matters little if our focus is on the wrong kind of data (or whatever).

let’s use data that matters.. to rewire ourselves to each other.

let’s use self talk.. as our data.

self talk graphic 2


if output matters, input matters

self talk as data graphic

document everything

application ness

data & society

id cubed

stack ness

street light data

reality analysis


open data



Data, Data, Everywhere, but Who Gets to Interpret It? | EPIC

Through this work we realized that there were in fact plenty of people who were interested in getting beyond the canned, fixed representations of data provided by apps makers, but were not necessarily interested in learning statistics or experimental science. As an anthropologist, I began to think about this disinterest as also insistence on re-valuing the situatedness of situated knowledge. That is, it is also a recognition that not all problems can be reduced to matters of scientific or positivist enquiry.
visualization tools that surface matters of concern, not matters of fact.
I cannot help but wonder how much contextual knowledge is sanitized away by those more comfortable making guesses about what others intended.
LSE Impact Blog (@LSEImpactBlog)
8/25/15 6:15 AM
The death of the theorist and the emergence of data and algorithms in digital social research.
yeah.. if data is self talk.. rev of everyday life.. over theory.
but doubt that’s the data they are talking about.
Computer code, software and algorithms have sunk deep into what Nigel Thrift has described as the “technological unconscious” of our contemporary “lifeworld,” and are fast becoming part of the everyday backdrop to Higher Education. Academic research across the natural, human and social sciences is increasingly mediated and augmented by computer coded technologies. This is perhaps most obvious in the natural sciences and in developments such as the vast human genome database. As Geoffrey Bowker has argued, such databases are increasingly viewed as a challenge to the idea of the scientific paper (with its theoretical framework, hypothesis and long-form argumentation) as the “end result” of science:
why of HE and not simply.. everyday life..?
The ideal database should according to most practitioners be theory-neutral, but should serve as a common basis for a number of scientific disciplines to progress. … In this new and expanded process of scientific archiving, data must be reusable by scientists. It is not possible simply to enshrine one’s results in a paper; the scientist must lodge her data in a database that can be easily manipulated by other scientists.
The apparently theory-neutral techniques of sorting, ordering, classification and calculation associated with computer databases have become a key part of the infrastructures underpinning contemporary big science. The coding and databasing of the world does not, though, end with big science. It is becoming a major preoccupation in the social sciences and humanities too.
beyond science.. soc sci.. humanities.. no..?
all of us.
as the at.
otherwise.. just winding our madness (un us ness) tighter…

For some enthusiastic commentators, such as Chris Anderson of Wired magazine, big data and its associated algorithmic techniques are bringing about “the end of theory” and disciplinary expertise:

This is a world where massive amounts of data and applied mathematics replace every other tool that might be brought to bear. Out with every theory of human behavior, from linguistics to sociology. Forget taxonomy, ontology, and psychology. Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity. With enough data, the numbers speak for themselves.


the point is they do it.
no detox?
no questioning/forgetting what we currently do..?
new everyday. as the day. detox ing us.
50 first dates et al
Social science appears to be escaping the academy. Instead of social scientists, the new experts of the social media environment, argue Viktor Mayer-Schonberger and Kenneth Cukier, are the “algorithmists” and big data analysts of Google, Facebook, Amazon, and of software and data analysis firms. Algorithmists are experts in the areas of computer science, mathematics, and statistics, as well as aspects of policy, law, economics and social research, who can undertake big data analyses and evaluations.
so what if math.. stats.. et al… need detox as well…
rather than just gathering what we have… what we do now… up..
seems obvious there’s a problem.. let’s start there.. fresh..
leap frog to us…
like africans (via articles) et al
deep/simple/open enough for all of us. today. everyday.
Facebook, for example, has a Data Science Team that is responsible for “what Facebook knows” and can “apply math, programming skills, and social science to mine our data for insights that they hope will advance Facebook’s business and social science at large.” Run by Facebook’s “in-house sociologist,” the Data Science Team aims to mobilize Facebook’s massive data resources to “revolutionize” understanding of why people behave as they do.
again… matters little if not really us.
let’s do this first… free artists

Similar methods are being deployed in politics. The think tank Demos has recently established a Centre for the Analysis of Social Media (CASM), a research centre dedicated to “social media science” which seeks to “see society-in-motion” through big data, as its research director Carl Miller explains:

To cope with the new kinds of data that exist, we need to use new big data techniques that can cope with them: computer systems to marshal the deluges of data, algorithms to shape and mould the data as we want and ways of visualising the data to turn complexity into sense.

Underlying “social media science” is a belief that the behaviour of citizens can be analysed and understood as a kind of data to inform new policy ideas. The emergence of “policy labs” that work across the social scientific, technological and policy fields—such as the Public Services Innovations Lab at Nesta, New York’s Governance Lab, and Denmark’s MindLab—is further evidence of how social science expertise is diversifying. For think tanks and policy labs the political promise of using sophisticated algorithmic techniques to analyze and visualize big data is to make societies and populations visible with unprecedented fidelity in order to improve government intervention.

matters little (ability to handle complex data).. if its not really about us.. as alive people..

ie:not helping… but rather perpetuating toxicity..

Many researchers are optimistic about the synergies between digital and social research methods. Emerging methods deployed by digital social researchers such as Lev Manovich include the use of Twitter and blogs to document everyday activities, the mobilization of search engine analytics to reveal massive population trends and social behaviours over time, the analysis of Instagram images to detect cultural and social patterns, the study of social network formation on Facebook, and so on. These platforms enable the continuous generation of data about social life and make possible new forms of social data, analysis and visualization.

everyday life of who…?
shell less turtles.
so.. our goal is better data/analysis/visualization.
imagine if it was simple: alive people. aka: free artists
As David Beer reports, the kind of software that can crawl, mine, capture and scrape the web for data has the potential to be powerful in academic research. Social media aggregators, algorithmic database analytics and other forms of what might be termed “sociological software” have the capacity to see social patterns in huge quantities of data and to augment how we “see” and “know” ourselves and our societies. Sociological software offers us much greater empirical, analytical and argumentative potential.
seeing our manufactured unnatural self…
what good is that.. except to keep the (toxic) cycle going..
what a waste of tech.
what a waste of us.
Noortje Marres has suggested another way to think about the proliferation of new devices and formats for the documentation of social life. She argues that we need to acknowledge a “redistribution of social research” and to see social science as a “shared accomplishment” as the roles of social research are distributed between different actors. Such a redistribution of research would include human actors such as academic researchers, software developers, data analysts, commercial R&D labs, and bloggers and tweeters, but also a much broader set of actors such as databases, software, algorithms, platforms, and other digital devices, media and infrastructures that all contribute to the enactment of digital social research. The redistribution of research among these diverse actors, Marres argues, would entail a “remediation of methods” as social research is reshaped and refashioned through the use of devices and platforms.
yes.. copying it all
more diversity/people involved.. doesn’t make life equitable/better..
if we don’t first question everything.. ie: systemic change would make many things we don’t question (that we see as given/basic) irrelevant

The process of redistributing, remediating, or “reassembling social science methods” as Evelyn Ruppert, John Law and Mike Savage have articulated it, means recognizing that digital devices are both part of the material of social lives and part of the methodological apparatus required for knowing those lives too. As they elaborate,

digital devices are reworking and mediating not only social and other relations, but also the very assumptions of social science methods and how and what we know about those relations.

Likewise, in an article on “algorithms in the academy,” David Beer has shown that software algorithms are increasingly intervening in social research through the “algorithmic normalities” of SPSS, GoogleScholar, LexisNexis, as well as emerging social media and data analytics devices, which frame information and codify the social world in certain ways—shaping the objects of analysis.

so.. we’re getting more efficient at gathering blah blah ness.

not better at listening to us.

Digital devices are also reworking and mediating HE institutions and academic researchers’ professional identities and personal lives. Roger Burrows has argued that the work of researchers in universities is now subject to “metricization” from an assortment of measuring and calculating devices. These include bibliometrics, citation indices, workload models, transparent costing data, research and teaching quality assessments, and commercial university league tables, many increasingly enacted via code, software and algorithmic forms of power. As a result, Deborah Lupton suggests, an academic version of the “quantified self” is emerging: a professional identity based on quantified measures of output and impact.
again… quantified something… but not self.. no us.. not alive people/public…

Writing in an article titled “#MySubjectivation,” the philosophy researcher Gary Hall argues that today’s social media are constitutive of a particular emergent “epistemic environment.” The epistemic environment of “traditional” academic knowledge production was based on the Romantic view of single authorship and creative genius materialized in writing, long-form argumentation, and the publication of books. New social media infrastructures, however, are reshaping the epistemic environment of contemporary scholarly knowledge production.

In the emerging epistemic environment of HE, academics are increasingly encouraged to be self-entrepreneurial bloggers and tweeters, utilizing social media platforms and open access publishing environments to extend their networks, drive up citations, promote their professional profiles, and generate impact. Commercial social media platforms such as Twitter, Facebook and LinkedIn are becoming part of the everyday networked infrastructure through which academics create, perform and circulate research, knowledge and theory. As Hall states it, the emerging epistemic environment:

invents us and our own knowledge work, philosophy and minds, as much as we invent it, by virtue of the way it modifies and homogenizes our thought and our behaviour through its media technologies.

To put it more bluntly, academics are becoming data, as mediated through complex coded infrastructures and devices. Geoffrey Bowker has written that “if you are not data, you don’t exist”; the same is true for academics in Higher Education. The unfolding effects of data and algorithms on HE ought to be the subject of serious social scientific inquiry.



no wonder it doesn’t matter to people if data is even us.

I like the awareness of the 1st para.. but then it goes on to expand without first waking people up.. aka: setting them free.. returning their turtle shell…

For researchers in Higher Education the task is to be open and alert to the current redistribution of research across these new infrastructures, devices, experts and organizations, and to recognize how our knowledge, theories and understandings of the social world we are studying are being mediated, augmented and even co-produced by software code and algorithmic power.
that’s gone on for at least a couple hundred yrs..
used for us.. www ness can free us of that.. ongoingly…

