Part II: Building a Second Brain — Neurons to Neuralink

by | Mar 4, 2020 | General Information, Neuroscience | 11 comments



How our brain has evolved to consume and retrieve information.


Recently, I was reading about a day in the life of celebrities. Don’t ask me why. It led me to two conclusions: first, I wouldn’t ever want to be a celebrity’s assistant, and second, more importantly, the biggest asset in the hands of celebrities is not money or status but time and focus.

‘These Highly Successful People — let’s call them HSPs — have many of the daily distractions of life handled for them, allowing them to devote all of their attention to whatever is immediately before them. They seem to live completely in the moment. Their staff handle correspondence, make appointments, interrupt those appointments when a more important one is waiting, and help to plan their days for maximum efficiency (including naps!). Their bills are paid on time, their car is serviced when required, they’re given reminders of projects due, and their assistants send suitable gifts to the HSP’s loved ones on birthdays and anniversaries. Their ultimate prize if it all works? A Zen-like focus.’ — The Organized Mind

Every account I read reflected a rigidly packed day where the layer of people around the HSPs acted as an attentional filter, bringing only the most important pieces of information into the celebrity’s focus. I don’t care for being rich, but I really care for being focused. Not all of us have a personal chef who switches up our nutrition every ten days and attends to all our dietary needs. Hence, in the absence of a swarm of people acting as a second brain, it is upon us to build our own.


Photo by NeONBRAND on Unsplash


But first, let’s dive into some fun (I swear) history of our cognition.

If you’re like me and don’t like reading the Chamber of Secrets before the Sorcerer’s Stone, I recommend reading Part I of this three-part series.


Memory, Attention, Retrieval 

If the human mind were a four-story house built at the beginning of time, evolution would be the Super who has attended to the house’s problems over the span of some 2.5 million years. Instead of overhauling the entire house every time a storm hit or the temperature dropped too low, the Super resorted to a band-aid approach and patched the areas that needed attention. With time, many of the rooms in the house weren’t needed as much as others, but they stayed on.

Memory is no exception here.

Several representational systems built up during evolution, with each new system adding to those inherited from earlier ancestors. These systems arose for the same fundamental reason: to transcend problems and exploit opportunities encountered by specific ancestors at particular times and places in the distant past. In effect, modern memory emerged via evolutionary accretion. — Oxford University Press


Photo Courtesy of OUP


When discussing memory, a question that often pops into our minds is our brain’s capacity. Scientific American ran an article in 2010 that put a number on this capacity: 2.5 petabytes. For those who don’t work with bytes every day, the ladder goes kilobytes → megabytes → gigabytes → terabytes, and then → petabytes. To put that in perspective, back when Yahoo served over 500 million monthly website visitors, their data warehouse held 2 petabytes, still 20% less than the capacity we hold in our three-pound jelly of a brain. It’s also worth recalling that the computer on board the Apollo spacecraft ran an operating system with a memory of 64 kilobytes. So, can we ever justify the statement, my brain is full?
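If the unit ladder feels abstract, the comparisons above reduce to quick arithmetic. A minimal sketch (using binary prefixes, where each step is 1,024×; the variable names are mine, and the figures are the ones quoted above):

```python
# Each rung of the ladder is 1,024x the previous one (binary prefixes).
KB = 1024
MB = 1024 * KB
GB = 1024 * MB
TB = 1024 * GB
PB = 1024 * TB

brain_estimate = 2.5 * PB   # Scientific American's 2010 figure
yahoo_warehouse = 2 * PB    # Yahoo's data warehouse at the time
apollo_memory = 64 * KB     # the Apollo on-board computer's memory

# Yahoo's 2 PB is 20% smaller than the 2.5 PB brain estimate
print((brain_estimate - yahoo_warehouse) / brain_estimate)  # 0.2

# And the brain estimate dwarfs the Apollo computer's memory
print(brain_estimate / apollo_memory)  # ~4.3e10, i.e. tens of billions of times larger
```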

The answer isn’t simple. Sorry.

In 1968, Richard Atkinson and Richard Shiffrin — both specializing in the field of cognitive science — proposed the now widely accepted multi-store memory model. They proposed that our memory is made up of not one, or two, but three components: a) a sensory register, b) a short-term store, and c) a long-term store. Let me try to paint a picture: imagine a small 8×8 ft room attached to the front door of a giant mansion. People flock to the small room carrying valuable information packages from all the neighboring villages. A guard standing at the door inspects the contents of the packages before letting anyone in. As more people from the same village carrying similar packages enter the small room, they are finally let into the mansion and given a permanent home. The poor chaps who don’t receive that backup are let out.

The people carrying information from the various senses (visual, auditory, etc.) function here as the sensory register, and the guard as the attentional filter, who then lets this information be stored temporarily in the small room, i.e. the short-term memory. And there is a limit to this.

In an influential paper titled “The Magical Number Seven, Plus or Minus Two,” psychologist George Miller suggested that people can store between five and nine items in short-term memory. More recent research suggests that people are capable of storing approximately four chunks or pieces of information in short-term memory. — Verywell Mind

This information stored in the short-term memory is then retained for a period of 20–30 seconds, before either getting discarded or transferred to the long-term memory by a process of rehearsal, not unlike listening to the same song on repeat (and later wishing you could go back in time and un-hear it).

Although the work of Atkinson and Shiffrin is widely regarded as the impetus for further research into memory architecture, it was and still is criticized on various fronts. As the decades rolled by, these criticisms were folded into newer, more accurate versions of the model, as tends to happen in research. Despite the criticism, two findings still largely hold: a) there is a distinction between short- and long-term memory, and b) our short-term memory is limited while long-term memory is effectively limitless.
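The guard-and-mansion analogy can be sketched as a toy simulation. This is purely illustrative, not a faithful model from the original paper: the capacity of four items and the three-rehearsal threshold for transfer to long-term memory are assumptions I’ve plugged in from the figures quoted above, and the class and method names are mine.

```python
from collections import deque, Counter

class MultiStoreMemory:
    """Toy sketch of the multi-store model: filter -> short-term -> long-term."""

    def __init__(self, stm_capacity=4, rehearsals_to_learn=3):
        # Short-term store: tiny; the oldest item is evicted when it fills up.
        self.stm = deque(maxlen=stm_capacity)
        # Long-term store: effectively unlimited.
        self.ltm = set()
        self.rehearsals = Counter()
        self.rehearsals_to_learn = rehearsals_to_learn

    def perceive(self, item, attended=True):
        # The attentional filter (the guard): unattended input never
        # enters the short-term store at all.
        if not attended:
            return
        self.stm.append(item)
        self.rehearsals[item] += 1
        # Enough rehearsal transfers the item into long-term memory.
        if self.rehearsals[item] >= self.rehearsals_to_learn:
            self.ltm.add(item)

mem = MultiStoreMemory()
for _ in range(3):
    mem.perceive("phone number")              # rehearsed thrice -> learned
mem.perceive("billboard ad", attended=False)  # filtered out entirely
print("phone number" in mem.ltm)  # True
print("billboard ad" in mem.stm)  # False
```

The `deque(maxlen=...)` captures the eviction of the “poor chaps” who don’t get reinforced: once a fifth package arrives, the oldest one is silently let out.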

A limited capacity working memory is a central concept in cognitive psychology. Nevertheless, capacity limitations only apply when dealing with new, not old information. When dealing with previously learned material, the only discernible limit on working memory is the amount that has been learned and stored in long-term memory. — Psychology of Learning and Motivation


Photo Courtesy of Simply Psychology


In our analogy above, not every package passes the guard because, like our short-term memory, our attentional filter also has a limited capacity. There is a threshold on the processing power of our brain, estimated by the psychologist Mihaly Csikszentmihalyi (and independently by a Bell Labs engineer named Robert Lucky) at 120 bits per second. If that sounds like a lot, it’s not. When someone is talking to you, you process about 60 bits per second just to understand their words. So 120 bits per second means you can barely follow two people talking to you at once (and I’ve tried and tested this by hopping on two meeting calls at once).

In fact, there is a term for just how blind we can be to things right in front of us that don’t catch our attention: inattentional blindness (not very original, I agree). Want to know if you suffer from it? Just take this two-minute test (and comment your result below!).

So the phrase paying attention is in fact not as metaphorical as you thought. Every time a notification on your phone or a mention of your name catches your attention, it comes at the expense of something else. Unfortunately, our brain is not great at distinguishing the trivial from the critical. For anything to be stored in our short-term memory, it has to either catch our attention or we have to will ourselves to notice it. Once that happens, some of it gets transferred to long-term memory and stays there permanently.

Now, I’ve mentioned three times that information is stored permanently in long-term memory. So a natural question you might have is: with a limitless long-term memory, why can’t I retrieve information as I please? Answer: retrieval failure. This is where we differ most from a computer. While we beat the machine at making associations, machines crush us at retrieving information flawlessly. A search for the number of states in the U.S. takes as long as a search for the number of mountain gorillas in the Virunga National Park (answer’s 480).

If you have a pen and paper handy, try drawing a penny. You’ve probably seen one a thousand times, but you’ve only observed it enough to distinguish it from a dime or a dollar. This lapse in retrieval is partly because we never encoded the information properly and partly because we don’t have enough cues to retrieve it. Just yesterday, I was struggling to recall a fact I had read in a book on the psychology of charity. I couldn’t remember the name of the book, but I could clearly visualize the area of the page where I had read it. Sadly, that wasn’t enough of a cue to retrieve it from my mind (I still haven’t forgiven myself).


Photo Courtesy: Similar contexts improve your recall

Is this the real life? 

The limited capacity of our attention and short-term memory, and our fallible retrieval system, all lead to the same conclusion: we need an external, limitless, low-latency, hopefully nicer-looking second brain. Elon Musk’s startup Neuralink goes the crazy extra mile by linking our brain to a computer, thus achieving true symbiosis between man and machine. With Neuralink, his aim is to head off the existential crisis he claims will arise when (not if) a robot gains human-like intelligence and the ability to think.

There are a bunch of concepts in your head that then your brain has to try to compress into this incredibly low data rate called speech or typing. That’s what language is — your brain has executed a compression algorithm on thought, on concept transfer. And then it’s got to listen as well, and decompress what’s coming at it. And this is very lossy as well. So, then when you’re doing the decompression on those, trying to understand, you’re simultaneously trying to model the other person’s mind state to understand where they’re coming from, to recombine in your head what concepts they have in their head that they’re trying to communicate to you. If you have two brain interfaces, you could actually do an uncompressed direct conceptual communication with another person. — Elon Musk


Photo Courtesy of WBW


Thankfully, there are a dozen other companies trying to connect our brain to a computer, with varying levels of success and for different reasons. Bryan Johnson invested $100 million of his own money in 2016 into his brainchild (pun intended), Kernel, hoping to create a neuroprosthesis and extend human cognition, as its website states. Han Bicheng founded BrainCo in 2014, while doing his doctorate at Harvard, to help students focus better by monitoring their attention through brain waves. And many more are working in this area on science-fiction-like feats, from enabling paraplegics to control devices with their minds to letting gamers play purely with their thoughts.

All that sounds surreal, and even scary (don’t tell me it doesn’t!). But the kicker is the timeline: first, no one has even an approximately certain one; expert estimates range from 10 years to 50+. Second, even if lossless communication between our brain and a computer becomes possible, the early adopters will be those with a disability, for whom spending thousands of dollars yields a commensurate return. Finally, even if the technology becomes available and inexpensive, there will be significant ethical and legal challenges. Still, I’m thrilled to see so many players fighting for this cause and want it to happen sooner rather than later.


The Immediate Need

The information in the world keeps increasing, while our ability to store and retrieve it has remained pretty much constant. Thankfully, the need to store it all in our brain is decreasing, thanks to the nifty technologies available today. And it’s not just technology; even the way our world is designed helps shift some of the burden from our brain onto the external world.

A classic example of this is a well-designed door, something you might have heard of if you’ve read The Design of Everyday Things. We don’t need to remember whether our bedroom door opens in or out because certain features of the door encode this information for us.


That’s what I call good design


Of course, this largely isn’t the case for the over 100,000 words we consume every day. It is upon us to monitor our mental diet and cut the fat (social media scrolling, clickbait-y articles, etc.). Most importantly, how do we consume good content and retain it for future retrieval?

That is what we’ll look at in the final part of this series, with the help of two tools. 


In Part I of this three-part series, we looked at information overload, and in Part II, we looked at information consumption. By now, I hope you’re convinced of the need to build a second brain. In Part III, I’ll tie it all together and discuss the tools you can use to start building one. If you like keeping up with my explorations in neuroscience and productivity, maybe consider subscribing below.