The Power of Swarms Can Help Us Fight Cancer, Understand the Brain, and Predict the Future
- BY ED YONG
swarms in nature working from the bottom up
The problem was, before anyone could figure out how swarms formed, someone had to figure out how to do the observations.
“You were trying to look at all the parts and the complete parcel at the same time.”
Conway had built a model of emergence—the ability of his little black and white critters to self-organize into something new.
real-world animal swarms might arise the same way—not from top-down orders, mental templates of orderly flocks, or telepathic communication (as some biologists had seriously proposed). Complexity, as Aristotle suggested, could come from the bottom up.
In 2002 Couzin cracked open the software and focused on its essential trinity of attraction, repulsion, and alignment. Then he messed with it. With attraction and repulsion turned up and alignment turned off, his virtual swarm stayed loose and disordered. When Couzin upped the alignment, the swarm coalesced into a whirling doughnut, like a school of mackerel. When he increased the range over which alignment occurred even more, the doughnut disintegrated and all the elements pointed themselves in one direction and started moving together, like a flock of migrating birds. In other words, all these different shapes come from the same algorithms. “I began to view the simulations as an extension of my brain,” Couzin says. “By allowing the computer to help me think, I could develop my intuition of how these systems worked.”
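The "essential trinity" Couzin describes can be sketched in a few lines. This is a minimal illustration, not his actual code: the function name, zone radii, and update scheme are assumptions, but the structure (repulsion overriding a blend of alignment and attraction) is the trinity the passage names.

```python
import numpy as np

def step(pos, vel, r_repel=1.0, r_align=5.0, r_attract=10.0, speed=0.5):
    """One update of a Couzin-style zonal model (a sketch; radii are assumptions).
    Each agent reacts only to neighbours: repulsion overrides everything,
    otherwise it blends alignment with attraction."""
    n = len(pos)
    new_dir = np.zeros_like(vel)
    for i in range(n):
        offsets = pos - pos[i]                     # vectors to every other agent
        dist = np.linalg.norm(offsets, axis=1)
        dist[i] = np.inf                           # ignore self
        repel = dist < r_repel
        if repel.any():
            # steer directly away from agents that are too close
            d = -offsets[repel].sum(axis=0)
        else:
            d = np.zeros(2)
            align = dist < r_align
            if align.any():
                d += vel[align].sum(axis=0)        # match neighbours' headings
            attract = dist < r_attract
            if attract.any():
                d += offsets[attract].sum(axis=0)  # move toward neighbours
            if not d.any():
                d = vel[i]                         # no neighbours: keep heading
        new_dir[i] = d / (np.linalg.norm(d) or 1.0)
    return pos + speed * new_dir, new_dir
```

tuning the alignment range relative to attraction is what this sketch would use to move the group between the three shapes the paragraph describes: loose swarm, whirling doughnut, parallel flock.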
Studying animal behavior “used to involve taking a notepad and writing, ‘The big gorilla hit the little gorilla,’ ” Vicsek says. “Now there’s a new era where you can collect data at millions of bits per second and then go to your computer and analyze it.”
move beyond just looking at how collectives form and begin to study what they can accomplish. What abilities do they gain?
Behavior like this is typically explained with the “many wrongs principle,” first proposed in 1964. Each shiner, the theory goes, makes an imperfect estimate about where to go, and the school, by interacting and staying together, averages these many slightly wrong estimations to get the best direction. You might recognize this concept by the term journalist James Surowiecki popularized: “the wisdom of crowds.”
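The arithmetic behind the many wrongs principle is just averaging: pooling many independent noisy bearings cancels their errors. A hypothetical sketch (the noise level and the circular-mean pooling are my assumptions, not from the 1964 paper):

```python
import math, random

def many_wrongs_error(group_size, noise=0.5, trials=2000, seed=1):
    """Average absolute heading error (radians) when a group pools
    individually noisy estimates of the true direction (taken as 0)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # each member's bearing = truth + Gaussian noise; pool as unit vectors
        bearings = [rng.gauss(0.0, noise) for _ in range(group_size)]
        x = sum(math.cos(b) for b in bearings)
        y = sum(math.sin(b) for b in bearings)
        total += abs(math.atan2(y, x))
    return total / trials
```

a lone fish with this noise is off by a fair fraction of a radian on average; a group of a few dozen shrinks that error several-fold, which is the "wisdom of crowds" claim in miniature.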
But in the case of shiners, Couzin’s observations in the lab have shown that the theory is wrong. The school could not be pooling imperfect estimates, because the individuals don’t make estimates of where things are darker at all. Instead they obey a simple rule: Swim slower in shade. When a disorganized group of shiners hits a dark patch, fish on the edge decelerate and the entire group swivels into darkness. Once out of the light, all of them slow down and cluster together, like cars jamming on a highway. “That’s purely an emergent property,” Couzin says. “The sensing ability really happens only at the level of the collective.” In other words, none of the shiners are purposefully swimming toward anything. The crowd has no wisdom to cobble together.
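The "swim slower in shade" rule is simple enough to simulate. A hypothetical 1-D sketch (the light field, step sizes, and tank size are all invented): each fish takes random steps whose length depends only on the light at its current position, never comparing light levels or aiming anywhere, and the group still ends up concentrated in the dark, because slow movers linger where they slow down.

```python
import random

def light(x):
    """Hypothetical light field: bright tank half for x < 0, shaded for x >= 0."""
    return 1.0 if x < 0 else 0.2

def fraction_in_shade(n_fish=200, steps=4000, seed=7):
    """Each fish obeys one local rule: step length is proportional to the
    light where it currently is, i.e. swim slower in shade. No fish senses
    a gradient or chooses a direction toward darkness."""
    rng = random.Random(seed)
    xs = [rng.uniform(-5.0, 5.0) for _ in range(n_fish)]
    for _ in range(steps):
        for i, x in enumerate(xs):
            x += 0.5 * light(x) * rng.choice([-1.0, 1.0])
            xs[i] = max(-5.0, min(5.0, x))     # tank walls
    return sum(x >= 0 for x in xs) / n_fish    # fraction in the shaded half
```

starting from a uniform spread, well over half the fish end up in the shaded half: darkness-seeking exists only at the level of the collective, exactly the emergent property Couzin describes.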
All these similarities seem to point to a grand unified theory of the swarm—a fundamental ultra-calculus that unites the various strands of group behavior. In one paper, Vicsek and a colleague wondered whether there might be “some simple underlying laws of nature (such as, e.g., the principles of thermodynamics) that produce the whole variety of the observed phenomena.”
Couzin has considered the same thing. “Why are we seeing this again and again?” he says. “There’s got to be something deeper and more fundamental.” Biologists are used to convergent evolution, like the streamlining of dolphins and sharks or echolocation in bats and whales—animals from separate lineages have similar adaptations. But convergent evolution of algorithms? Either all these collectives came up with different behaviors that produce the same outcomes—head-butting bees, neighbor-watching starlings, light-dodging golden shiners—or some basic rules underlie everything and the behaviors are the bridge from the rules to the collective.
Building a successful robot swarm would show that the researchers have figured out something basic. Robot groups already exist, but most have sophisticated artificial intelligence or rely on orders from human operators or central computers. To Tamás Vicsek—the physicist who created those early flock simulations—that’s cheating. He’s trying to build quadcopters that flock like real birds, relying only on knowledge of their neighbors’ position, direction, and speed. Vicsek wants his quadcopters to chase down another drone, but so far he’s had little success. “If we just apply the simple rules developed by us and Iain, it doesn’t work,” Vicsek says. “They tend to overshoot their mark, because they do not slow down enough.”
one of the fundamental emergent properties of a flock is collision avoidance,
So far, the Belugas’ biggest obstacle has been engineering. The robots’ responses to commands are delayed. Small asymmetries in their hulls change the way each one moves. Ultimately, dealing with that messiness might be the key to taking the study of collectives to the next level.
aliveness as – not being able to controlness.. you can’t grab hold of perpetual beta.. even as a line of best fit.. without compromising – and thus – missing it
via Kevin – swarmwise by Rick Falkvinge: [https://falkvinge.net/files/2013/04/Swarmwise-2013-by-Rick-Falkvinge-v1.1-2013Sep01.pdf]
ended up just skimming because of things like ie:
the magic of a consensus circle
public consensus always oppresses someone(s)
people who invest their time and id in the swarm do so because they agree w the swarm on a fundamental social level. if the swarm re id’s itself, that will create a discomfort. even the aired idea of doing so will create severe discomfort among activists and cause a standstill and a halt to recruiting
say for instance you have a swarm focused on going to mars, and all of a sudden you air the idea of repurposing the org to selling mayo instead.. a ridiculous ie to make a point, but the social and emotional effects will be very similar for the more credible repurposings – even those you think would make perfect sense
see.. ugh.. that can’t be us..
not picking on him or swarms.. just saying.. that can’t be us.. if we want to be eudaimoniative us
also thinking – intelligence isn’t our essence.. ie: swarm intelligence
then got a bit deeper while reading Kevin Carson‘s regulated state: