david dalrymple


found him via Neil Gershenfeld and the cba.

David A. Dalrymple is currently leading an independent biophysics project (nemaload below) in San Francisco, and he is a Research Affiliate of the Synthetic Neurobiology group at the MIT Media Lab. His ambition is to build an extraordinary research center like the historical Bell Labs or Xerox PARC.

nemaload

David was homeschooled from 1994 to 2000 (ages 3-8?).

He took his first class at UMBC in the fall of 2000 and graduated with B.S. degrees in Computer Science and Mathematics in the spring of 2005. During this time he also spoke at TED 11, took a summer at sea, and worked with Kurzweil Technologies to create the earliest prototypes of the KNFB Reader.

After he spent a year working as an independent consultant out of his parents’ basement, they let him move to Cambridge, Massachusetts, in June 2006 to begin graduate studies at the MIT Media Lab. (As far as any of the administrators knew, nobody as young as 14 had ever entered an MIT graduate program before.)

In June 2008, David received his S.M. in Media Technology, with a thesis titled “Asynchronous Logic Automata,” and began the Ph.D. program in Media Arts and Sciences as a member of the Mind Machine Project under Marvin Minsky.

In the summer of 2010, David attended Singularity University at the NASA Ames Research Center in Mountain View, California, which inspired him to refocus from the world of computer architecture, programming language design, and artificial intelligence to the world of biophysics and neuroscience.

On the advice of the faculty, in 2011, David left the Media Lab Ph.D. program at MIT for the Biophysics Ph.D. program at Harvard.

In 2012, David went on leave from Harvard and signed a research grant agreement with the Thiel Foundation to pursue his research goals in an independent context.

He will soon be announcing the Laboratorium and Mesh projects.

__________________

The potential benefits of the NEMALOAD project include (not all-inclusive, and links left out.. go to the nemaload site for all that):

  1. Basic discoveries in neuroscience: No analysis of an entire nervous system at the single-neuron level has ever been performed, due to the large number of recent technologies required. ..
  2. Pushing the envelope of data-driven modeling: The interrogator provided by this project will provide access to a world of data with an unprecedented combination of richness and tractability… 
  3. Providing a foundation for uploading research: .. Even if the philosophical assumptions fail, and human immortality through uploading is fundamentally impossible, a human upload process would be of incalculable value in curing neurological and mental illnesses. If the transfer of human consciousness to digital substrate is indeed possible, it would fundamentally transform society, and if NEMALOAD is successful, I hope it inspires ethicists, philosophers, economists, sociologists and other humanities thinkers to take human uploading more seriously and help prepare our civilization for the possibility of its arrival.

______________________

find/follow David:

twitter

facebook

youtube

github, ..

______________________

this isn’t (doesn’t need to be) rare.

imagine – facilitating authenticity…

finding all the geniuses inside.. waiting to be seen/heard/loved.

ie:

jack et al

and imagine if we called – facilitating curiosity – school – in the city – as the day?

7 billion such researchers/entrepreneurs/happy people.. no?

____________

2016 edge question

David (@davidad), Research Affiliate, MIT Media Lab
the newly dominant approach, originally known as “neural networks,” is now branded “deep learning,” to emphasize a qualitative advance over the neural nets of the past. Its recent success is often attributed to the availability of larger datasets and more powerful computing systems..
imagining deep learning w/deep datasets… (small-world networks that matter – per choice – per whimsy)
with 7 billion people.
everyday. as the day.
[..]
So what is the magic that separates deep learning from the rest, and can crack problems for which no group of humans has ever been able to program a solution? The first ingredient, from the early days of neural nets, is a timeless algorithm, rediscovered again and again, known in this field as “back-propagation”. It’s really just the chain rule—a simple calculus trick—applied in a very elegant way. It’s a deep integration of continuous and discrete math, enabling complex families of potential solutions to be autonomously improved with vector calculus.
[..]
The key is to organize the template of potential solutions as a directed graph (e.g., from a photo to a generated caption, with many nodes in between). Traversing this graph in reverse enables the algorithm to automatically compute a “gradient vector,” which directs the search for increasingly better solutions. You have to squint at most modern deep learning techniques to see any structural similarity to traditional neural networks, but behind the scenes, this back-propagation algorithm is crucial to both old and new architectures.
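a minimal sketch of that “back-propagation is just the chain rule” idea, written in JAX (a tool choice of this page, not something the essay names; the tiny two-node graph and its numbers are invented purely for illustration):

```python
import jax
import jax.numpy as jnp

# a toy directed graph of computations: x -> h -> y -> loss
def h(x, w1):                      # first node: a linear map
    return w1 * x

def y(h_val, w2):                  # second node: a squashing nonlinearity
    return jnp.tanh(w2 * h_val)

def loss(x, w1, w2, target):       # objective at the end of the graph
    return (y(h(x, w1), w2) - target) ** 2

# reverse-mode autodiff ("back-propagation") traverses the graph backwards
# and returns the gradient vector for whichever parameters we ask about
grad_fn = jax.grad(loss, argnums=(1, 2))
g_w1, g_w2 = grad_fn(0.5, 1.2, -0.7, 0.3)

# the same number falls out of the chain rule applied by hand:
# dloss/dw1 = dloss/dy * dy/dh * dh/dw1
x, w1, w2, t = 0.5, 1.2, -0.7, 0.3
y_v = jnp.tanh(w2 * (w1 * x))
manual_g_w1 = 2 * (y_v - t) * (1 - y_v**2) * w2 * x
print(g_w1, manual_g_w1)           # agree, up to floating point
```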
is that not locking us in .. to an agenda.. even if it’s ours..? ie: the need to change your mind everyday..
also thinking England‘s – can’t go in reverse if alive
The other key piece of magic in every modern architecture is another deceptively simple idea: components of a network can be used in more than one place at the same time.
[..]

Many of the most successful architectures of the past couple years reuse components in exactly the same patterns of composition generated by common “higher-order functions” in functional programming. This suggests that other well-known operators from functional programming might be a good source of ideas for deep learning architectures.
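and a small sketch of that “same component, many places” idea in its functional-programming flavor, again in JAX (the layer, shapes and data below are invented for illustration; vmap and scan are JAX’s versions of the higher-order functions map and fold):

```python
import jax
import jax.numpy as jnp

# one small reusable component: a dense layer with shared weights
def dense(params, x):
    W, b = params
    return jnp.tanh(W @ x + b)

key = jax.random.PRNGKey(0)
k_params, k_data = jax.random.split(key)
params = (0.1 * jax.random.normal(k_params, (4, 4)), jnp.zeros(4))
xs = jax.random.normal(k_data, (10, 4))        # a length-10 sequence of 4-vectors

# reuse 1 - map: apply the very same layer to every element of the sequence
apply_everywhere = jax.vmap(dense, in_axes=(None, 0))   # params shared, data batched
mapped = apply_everywhere(params, xs)                   # shape (10, 4)

# reuse 2 - fold: thread a state through the sequence with the same weights
# at every timestep (the skeleton of a recurrent network)
def step(carry, x_t):
    new_carry = dense(params, carry + x_t)
    return new_carry, new_carry

final_state, outputs = jax.lax.scan(step, jnp.zeros(4), xs)
```

the weight sharing in convolutional and recurrent nets is exactly this kind of reuse, expressed here with map and fold.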

great if fractal ing… not so great if perpetuate ing

so begs datasets be.. deeper.. deep enough

imagine…. deep enough for all of us.. ie: self talk as data… app/chip ness

[..]

The most natural playground for exploring functional structures trained as deep learning networks would be a new language that can run back-propagation directly on functional programs. .. Grefenstette et al. recently published differentiable constructions of a few simple data structures (stack, queue, and deque), which suggests that further differentiable implementations are probably just a matter of clever math. Further work in this area may open up a new programming paradigm—differentiable programming. Writing a program in such a language would be like sketching a functional structure with the details left to the optimizer; the language would use back-propagation to automatically learn the details according to an objective for the whole program.
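a toy version of what that might feel like, using JAX as a stand-in for the imagined language (this is not Grefenstette et al.’s differentiable stack.. just a sketched functional skeleton with invented parameter names, whose details are left to the optimizer):

```python
import jax
import jax.numpy as jnp

# the "program": a functional skeleton (a fold over the inputs) whose
# numeric details are deliberately left unspecified, living in params
def program(params, xs):
    def step(carry, x):
        carry = jnp.tanh(params["W"] @ carry + params["U"] * x)
        return carry, carry
    final, _ = jax.lax.scan(step, jnp.zeros(params["W"].shape[0]), xs)
    return params["readout"] @ final

# an objective for the whole program: here a toy target, summing the inputs
def objective(params, xs, target):
    return (program(params, xs) - target) ** 2

grad_fn = jax.jit(jax.grad(objective))

key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
params = {
    "W": 0.1 * jax.random.normal(k1, (8, 8)),
    "U": 0.1 * jax.random.normal(k2, (8,)),
    "readout": 0.1 * jax.random.normal(k3, (8,)),
}

xs = jnp.array([0.2, -0.5, 0.9, 0.1])
target = jnp.sum(xs)

# back-propagation fills in the details according to the whole-program objective
for _ in range(200):
    grads = grad_fn(params, xs, target)
    params = jax.tree_util.tree_map(lambda p, g: p - 0.05 * g, params, grads)
```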

a new language.. or 7 bill idiosyncratic jargon/languages.. stacked et al.. the not knowing keeping us alive.. the having to get to know to know.. keeping us human/kind/antifragile