Wednesday, July 22, 2020

4 Algorithms We Borrowed From Nature

When you think about algorithms, you probably think of Google searches, YouTube recommendations, or predictive text: situations where powerful computers come up with the information you’re looking for. An algorithm, though, is basically any recipe of calculations that a computer can follow to produce a specific kind of information. And algorithms aren’t just for computers.

They show up all over nature, too, in places like your immune system and in schools of fish. And just as engineers borrow ideas from nature’s physical designs, some computer scientists look for inspiration in nature’s algorithms. Here are four ways our technology has improved thanks to algorithms we swiped from nature.

Say you’re looking for the perfect fuzzy animal photo to send as a virtual hug to your friend. An image search pulls up some cuddling kittens that are almost right, if only you could find a slightly more zoomed-out version. What you want in this situation is something called nearest-neighbors search: an algorithm that can quickly search a big database to find the items most similar to the one you specify.
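
In its most literal form, nearest-neighbors search just measures the distance from your query to every item in the database and keeps the closest ones. Here’s a minimal sketch of that brute-force version in Python, assuming each image has already been boiled down to a feature vector (the vectors, the Euclidean distance, and the function name are all just illustrative choices):

```python
import numpy as np

np.random.seed(0)

def nearest_neighbors(query, database, k=3):
    """Brute-force nearest-neighbors search.

    query:    1-D feature vector describing the image you have
    database: 2-D array with one feature vector per stored image
    Returns the indices of the k closest stored vectors.
    """
    # Distance from the query to every stored vector, then keep the k smallest.
    distances = np.linalg.norm(database - query, axis=1)
    return np.argsort(distances)[:k]

# Toy usage: five stored "images", each described by four numbers.
database = np.random.rand(5, 4)
query = database[2] + 0.01                       # almost identical to item 2
print(nearest_neighbors(query, database, k=2))   # item 2 comes out first
```

Note that this version compares the query against every single stored item, which is fine for five images.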

That gets harder as the database gets bigger, and on the internet, there are way too many images for the search engine to compare your photo with every single one. So how do search engines pull off that feature that gives you “visually similar images”? One technique is called locality-sensitive hashing.

This is a type of algorithm that digests each image into a short digital fingerprint called a hash, with similar hashes for similar inputs. For example, if your inputs were essays, a decent hash might be the first letters of the first twenty sentences. So if one essay was copied from another, their hashes would likely be very close.
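
Here’s a rough sketch of that toy essay hash in Python, along with the bucket-by-hash lookup described next. The sentence splitting and the twenty-sentence cutoff come straight from the example above; the sample essays and the exact-match buckets are simplifications of my own (real locality-sensitive hashing compares hashes approximately rather than requiring an exact match):

```python
from collections import defaultdict

def essay_hash(essay, n_sentences=20):
    """Toy locality-sensitive hash: the first letter of each of the
    first n_sentences sentences, lowercased."""
    sentences = [s.strip() for s in essay.split(".") if s.strip()]
    return "".join(s[0].lower() for s in sentences[:n_sentences])

essays = {
    "a": "The cat sat. She purred. The end.",
    "b": "The cat sat. She purred loudly. The end.",  # near-copy of "a"
    "c": "Rockets are loud. They burn fuel. Liftoff.",
}

# Organize essays into buckets keyed by their hash...
buckets = defaultdict(list)
for name, text in essays.items():
    buckets[essay_hash(text)].append(name)

# ...so finding similar essays means reading one small bucket
# instead of comparing against the whole collection.
print(dict(buckets))   # {'tst': ['a', 'b'], 'rtl': ['c']}
```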

This method makes it easy to find similar inputs. Instead of comparing your kittens to every other image on the internet, Google can organize images by their hashes and just pull out the similar ones. The catch is that locality-sensitive hashing can still be kind of slow, and sometimes inaccurate.

That is where fly brains come to the rescue. See, a fly can smell, but it doesn’t differentiate every subtle variation of odor; it groups odors into categories so it can learn that, for instance, cheese smells often lead to fruit, but book smells don’t. In 2017, a team of computer scientists and biologists realized that fly brains group odors using a form of locality-sensitive hashing.

Except in the flies’ version, the brain boils a smell down to a few numbers by first expanding the smell data into a much larger collection of numbers. Only then does it select a few of those numbers as the hash. It’s sort of like expanding an essay by replacing each character with a random 10-character code, producing a string of gibberish ten times as long.

Then you could find the hundred gibberish words that appear most frequently, take the first letter of each, and use that as the essay’s hash. As strange as that strategy sounds, it turns out to work really well. All the extra gobbledegook gives the algorithm more opportunities to find patterns that jump out strongly for one cluster of inputs but are conspicuously absent for others.
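
Here’s a minimal sketch of that expand-then-select idea in Python, loosely inspired by the fly version; the random projection, the tenfold expansion, and the keep-the-top-few rule are illustrative choices of mine rather than the exact recipe from the 2017 paper:

```python
import numpy as np

def fly_hash(x, expansion=10, top_k=4, seed=0):
    """Expand a small feature vector into a much larger random mixture,
    then keep only the positions of the strongest values as the hash."""
    rng = np.random.default_rng(seed)                # same projection for every input
    projection = rng.random((len(x) * expansion, len(x)))
    expanded = projection @ x                        # small vector -> much bigger vector
    return frozenset(np.argsort(expanded)[-top_k:])  # positions of the top-k values

# Two similar inputs should share most of their hash; a different one shouldn't.
a = np.array([1.0, 0.2, 0.1, 0.9])
b = np.array([0.95, 0.25, 0.1, 0.9])    # nearly the same as a
c = np.array([0.1, 1.0, 0.8, 0.05])     # quite different

print(len(fly_hash(a) & fly_hash(b)))   # large overlap, often all 4 positions
print(len(fly_hash(a) & fly_hash(c)))   # usually much smaller
```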

When the computer scientists built their own fly-based hashing algorithm, it was up to twice as accurate as traditional methods and also twenty times faster!

Computer vision is everywhere. Self-driving cars, MRI technology, facial recognition: they all use it. Most of these systems need to do some form of object recognition, meaning they need to identify the contents of an image. For decades, computer scientists used handcrafted algorithms to extract image features like edges and contiguous shapes.

Then they could build other algorithms that used those features to guess what was in each part of an image. But all these hand-tuned algorithms tend to be fragile. It’s up to the cleverness of engineers to design the right kinds of analysis and tweak the parameters just so.
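
To get a feel for what “handcrafted” means here, this is a rough sketch of a classic hand-designed edge feature in Python: a fixed filter plus a cutoff that someone has to pick by hand (the kernel values and the 0.25 threshold are exactly the kind of parameters that had to be tweaked just so):

```python
import numpy as np

# A classic hand-designed filter for vertical edges (Sobel-style):
# dark-to-light transitions from left to right produce big responses.
EDGE_KERNEL = np.array([[-1, 0, 1],
                        [-2, 0, 2],
                        [-1, 0, 1]], dtype=float)

def vertical_edges(image, threshold=0.25):
    """Slide the kernel over every 3x3 patch and keep the strong responses."""
    h, w = image.shape
    response = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = image[i:i + 3, j:j + 3]
            response[i, j] = np.sum(patch * EDGE_KERNEL)
    return np.abs(response) > threshold   # hand-picked cutoff

# Toy image: dark left half, bright right half -> one vertical edge.
img = np.zeros((6, 6))
img[:, 3:] = 1.0
print(vertical_edges(img).astype(int))
```

Every number in there was chosen by a person, which is part of why pipelines built this way break when images stop looking like the ones they were tuned on.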

Now, engineers are pretty clever, but there’s only so much subtlety and detail they can code up. In the background, though, a different approach was taking shape: convolutional neural networks, or CNNs. In artificial intelligence, most kinds of neural networks are based on nature only in a crude way.

Like, they’re called neural networks because they kind of work like neurons. But convolutional neural networks are based on Nobel Prize-winning research on cat brains. Back in the 1950s, a pair of neuroscientists discovered that some neurons in a cat’s visual cortex, called simple cells, would respond only to simple visual elements like a line in a specific place at a specific orientation.

Those simple cells pass information to so-called complex cells, which aggregate the information across a wider area. In other words, these researchers discovered a hierarchy in the brain’s visual processing: earlier layers detect basic features at different locations, then later layers add all that together to detect more complex patterns.

That structure directly inspired the first convolutional neural networks. In the first layer of a CNN, each simulated neuron looks only at one small patch of the image and checks how well that matches a simple template, like a spot of blue or an edge between light and dark.

The neuron gives the patch a score depending on how well that patch matches the neuron’s template. Then the next level looks at all the scores for edges and spots in a slightly bigger patch and matches them against a more complex template, and so on up the hierarchy until you’re looking for cat paws and bicycle wheels.

A CNN learns these templates automatically from data, saving engineers from manually specifying what to look for. Today, CNNs totally dominate computer vision. And although they now have bells and whistles that have nothing to do with the brain, the visual hierarchy is still baked in.
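
For a sense of what that stack of layers looks like in code, here’s a tiny CNN sketched with PyTorch; the layer sizes, the 32×32 input, and the ten output classes are arbitrary choices for illustration, not any particular real model:

```python
import torch
from torch import nn

# Each Conv2d layer holds a bank of small learnable templates ("filters").
# Early layers see tiny patches; pooling lets later layers see bigger ones.
tiny_cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # layer 1: simple templates (edges, spots)
    nn.ReLU(),
    nn.MaxPool2d(2),                              # shrink the map so the next layer sees more
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # layer 2: combinations of layer-1 scores
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),                    # final scores over 10 object classes
)

scores = tiny_cnn(torch.randn(1, 3, 32, 32))      # one fake 32x32 RGB image
print(scores.shape)                               # torch.Size([1, 10])
```

The templates inside each Conv2d start out random and get adjusted from example images during training, which is the “learns these templates automatically from data” part.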

Next, companies really hate getting hacked. There are lawsuits and bad press, and it’s pretty inconvenient for them and the people who rely on them. So if a company’s network starts getting hammered with unusual traffic, it might be a good idea to lock things down. But detecting what counts as unusual traffic isn’t always easy.

It’s an example of what’s called anomaly detection, or scanning for atypical data, which can be tricky. See, you can’t just lay out rules for what normal traffic looks like. For one thing, what is normal is always changing. And anyway, hard rules would be too rigid: you wouldn’t want a red alert before every holiday just because a bunch of employees traveled early and logged in from home.

It might be tempting to try supervised machine learning, where you show an algorithm lots of good and bad examples, and it figures out how to tell them apart. But with anomaly detection, you often don’t have many examples of the bad stuff you’re trying to catch! Most of what a company has, of course, is logs of normal network traffic.

So how can it learn what abnormal traffic looks like? One particularly cool solution is based on our bodies. Because you know what’s really good at detecting a few bad guys in a sea of things that belong? Your immune system. To recognize and kill off invaders, your immune system uses cells called lymphocytes, which have little receptors that detect foreign proteins. But your body actually produces a huge variety of lymphocytes, with receptors that detect pretty much any random protein snippet, including bits of proteins that are supposed to be around.

You don’t want to attack those, so before your body releases its lymphocytes, your thymus gland selectively kills off the ones that would detect familiar proteins. As a result, the only lymphocytes that survive are ones that detect foreign proteins. This is called negative selection, and anomaly-detection algorithms can use a similar concept to spot unusual traffic.

They can generate detectors for random sequences of traffic data, then delete any detectors that go off on normal traffic logs. The detectors that remain respond only to abnormal patterns.
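
Here’s a minimal sketch of that negative-selection idea in Python. Real systems work on network traffic features; to keep this readable, the toy version below treats each traffic record as a short string and each detector as a random substring pattern, and all of those representation choices are mine rather than part of any particular security tool:

```python
import random

random.seed(0)
ALPHABET = "abc"

def random_detector(length=3):
    """A detector 'fires' on any record that contains its pattern."""
    return "".join(random.choice(ALPHABET) for _ in range(length))

def negative_selection(normal_logs, n_candidates=500):
    """Generate lots of random detectors, then delete any that fire on normal traffic."""
    candidates = {random_detector() for _ in range(n_candidates)}
    return {d for d in candidates if not any(d in log for log in normal_logs)}

def is_anomalous(record, detectors):
    return any(d in record for d in detectors)

# All the company has: logs of normal traffic.
normal = ["abcabcabc", "aabbaabb", "abcabcaab"]
detectors = negative_selection(normal)

print(is_anomalous("abcabcabc", detectors))  # False: looks like the normal logs
print(is_anomalous("cccbccbcc", detectors))  # True: patterns never seen in normal traffic
```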

Finally, in lots of situations, having multiple computers coordinate to divide up a task is crucial: for example, to carry out a robotic search-and-rescue mission, or to index the entire internet. When you have just a few computers in a network, it’s easy to have one central command computer coordinate them all. But if you’re coordinating hundreds of thousands of machines, or the machines are cut off from one another, controlling them with one central computer becomes impractical. So all those machines need a process that they all follow independently, one that somehow gets the job done efficiently and without horrible mistakes.

Little machines acting independently getting big projects done… sounds kind of like an insect colony! As it happens, there’s a whole niche of what are called swarm intelligence algorithms that tackle problems like this, and many are based on insect behavior.

For example, there are construction robots that collaborate by imitating termites. We still don’t know exactly how termites build their massive mounds. But we do know that each worker can only see its local environment: what’s been built right there and where surrounding workers are. That means the only way for the termites to coordinate is by leaving indirect signals for each other in their shared environment.

Like, when one termite does a bit of construction work, it leaves the soil arranged as some kind of cue to other termites about what needs to be done next. This indirect coordination strategy is called stigmergy. Inspired by termite stigmergy, a system of robots called TERMES allows a fleet of little robots to build arbitrary structures with no central coordination.

Just by sensing what’s been built and following some basic traffic rules, each robot figures out what to do next to get closer to the target structure. The hope is for similar robots to one day build complex structures even in hostile environments, like war zones or on Mars, without depending on a centralized controller.
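
To show the flavor of stigmergy in code, here’s a toy sketch in Python where a few independent “robots” build a brick wall toward a target shape by looking only at what has already been built, with no central controller assigning work; the wall-of-bricks setup and the build rule are invented for illustration and are not the actual TERMES algorithm:

```python
import random

random.seed(1)

# Target heights for a little brick wall; the shared wall starts empty.
target = [1, 3, 2, 4, 2]
wall = [0] * len(target)   # the shared environment every robot can sense

def robot_step(wall, target):
    """One robot's local rule: look at the wall as it currently stands,
    find spots that still need bricks, and add one brick to one of them."""
    unfinished = [i for i, (built, goal) in enumerate(zip(wall, target)) if built < goal]
    if unfinished:
        wall[random.choice(unfinished)] += 1   # the wall itself is the only "message"

# Three robots act independently, each just reacting to the shared wall.
while wall != target:
    for _ in range(3):
        robot_step(wall, target)

print(wall)   # [1, 3, 2, 4, 2] -- the target shape, with no central coordinator
```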

Now, nature-inspired algorithms can get a bit out of hand. People have designed algorithms based on wolf pack behavior, virus evolution, lightning paths, and on and on. Nature-inspired computing has been criticized for encouraging cute metaphors that don’t add insight or are unnecessarily complicated. But as you’ve seen, sometimes natural phenomena really can make for great inspiration. Thanks for reading this article on tech with d.
