Saturday, June 24, 2017

Testosterone Supplements: Overconfidence and Bad Judgment

New York Times:
“Does being over 40 make you feel like half the man you used to be?”

Ads like that have led to a surge in the number of men seeking to boost their testosterone. The Food and Drug Administration reports that prescriptions for testosterone supplements have risen to 2.3 million from 1.3 million in just four years.

There is such a condition as “low-T,” or hypogonadism, which can cause fatigue and diminished sex drive, and it becomes more common as men age. But according to a study published in JAMA Internal Medicine, half of the men taking prescription testosterone don’t have a deficiency. Many are just tired and want a lift. But they may not be doing themselves any favors. It turns out that the supplement isn’t entirely harmless: Neuroscientists are uncovering evidence suggesting that when men take testosterone, they make more impulsive — and often faulty — decisions.

Researchers have shown for years that men tend to be more confident about their intelligence and judgments than women, believing that solutions they’ve generated are better than they actually are. This hubris could be tied to testosterone levels, and new research by Gideon Nave, a cognitive neuroscientist at the University of Pennsylvania, along with Amos Nadler at Western University in Ontario, reveals that high testosterone can make it harder to see the flaws in one’s reasoning.

How might heightened testosterone lead to overconfidence? One possible explanation lies in the orbitofrontal cortex, a region just behind the eyes that’s essential for self-evaluation, decision making and impulse control. The neuroscientists Pranjal Mehta at the University of Oregon and Jennifer Beer at the University of Texas, Austin, have found that people with higher levels of testosterone have less activity in their orbitofrontal cortex. Studies show that when that part of the brain is less active, people tend to be overconfident in their reasoning abilities. It’s as though the orbitofrontal cortex is your internal editor, speaking up when there’s a potential problem with your work. Boost your testosterone and your editor goes reassuringly (but misleadingly) silent.

Men are also more likely to overestimate how well they’ll perform compared with their peers. Researchers at Kiel University in Germany and at Oxford gave a group of adults a test that assesses judgment and reasoning called the Cognitive Reflection Test, or C.R.T.

To see what the C.R.T. looks like, try answering this question: A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?

If you’re like most people, your first thought is that the ball costs 10 cents. But that is incorrect. If the ball costs $0.10, and the bat costs $1.00 more (or $1.10), then the total would be $1.20. So the ball costs 5 cents and the bat costs $1.05.
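If you'd rather check the arithmetic than trust your gut, the whole problem fits in a few lines (a quick sketch in Python):

```python
# ball + bat = 1.10 and bat = ball + 1.00,
# so ball + (ball + 1.00) = 1.10, i.e. 2*ball = 0.10, i.e. ball = 0.05
ball = (1.10 - 1.00) / 2
bat = ball + 1.00
print(f"ball = ${ball:.2f}, bat = ${bat:.2f}, total = ${ball + bat:.2f}")
# ball = $0.05, bat = $1.05, total = $1.10
```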

If you got this wrong, you’re not alone. Even at Ivy League schools such as Harvard and Princeton, fewer than 30 percent of students answer all the questions correctly. That is how these clever questions are designed: there’s an immediate, obvious answer that feels right but is actually wrong.

In the Kiel University study, both genders thought they’d scored higher on the test than they actually had. When asked to predict how others would fare, however, women expected other women to earn comparably high scores, but men thought they’d significantly outperform other men.

People don’t like to believe that they’re average. But compared with women, men tend to think they’re much better than average.

If you feel your judgment is right, are you interested in how others see the problem? Probably not. Nicholas D. Wright, a neuroscientist at the University of Birmingham in Britain, studies how fluctuations in testosterone shape one’s willingness to collaborate. Most testosterone researchers study men, for obvious reasons, but Dr. Wright and his team focus on women. They asked women to perform a challenging perceptual task: detecting where a fuzzy pattern had appeared on a busy computer screen. When women took oral testosterone, they were more likely to ignore the input of others, compared with women in the placebo condition. Amped up on testosterone, they relied more heavily on their own judgment, even when they were wrong.

The findings of the latest study, which have been presented at conferences and will be published in Psychological Science in January, offer more reasons to worry about testosterone supplements.

Dr. Nave and Dr. Nadler’s team asked 243 men in Southern California to slather gel onto their shoulders, arms and chest. Half of the men rubbed in a testosterone gel, and the rest rubbed in a placebo. Once the gel dried, they put on their shirts and went about their day.

Four and a half hours later, enough time for their testosterone levels to peak and stabilize, the men returned to the lab. They sat down at a computer and took several tests — a math test, a mood questionnaire and the C.R.T.

For the men with extra testosterone, their moods hadn’t changed much, but their ability to analyze carefully had. They were, on average, 35 percent more likely to make the intuitive mistake on the bat and ball question. They also rushed to their bad judgments, giving incorrect answers faster than the men with normal testosterone levels, while taking longer to generate correct answers.

Some will shrug and say that making a mistake on a sneaky word problem isn’t a concern in daily life, but researchers are discovering that these reasoning errors could affect financial markets. A team of neuroeconomists, led by Dr. Nadler, along with Paul J. Zak at Claremont Graduate University, gave 140 male traders either testosterone gel or a placebo. The next day, the traders came back into the lab and participated in an asset trading simulation.

The results are disturbing. Men with boosted testosterone significantly overpriced assets compared with men who got the placebo, and they were slower to incorporate data about falling values into their trading decisions. In other words, they created a trading bubble that was slow to pop. (Fortunately, Dr. Nadler didn’t have these men participate in a real stock market, out of concern for what a single dose of this drug could do.)

History has long labeled women as unreliable and hysterical because of their hormones. Maybe now it’s time to start saying, “He’s just being hormonal.”

The research has its limitations. On average, men in these studies were in their early 20s, and a surge in testosterone might not impair older men’s reasoning in quite the same way. And of course this research doesn’t prove that all men are bad decision makers because of their testosterone or that they’re worse decision makers than women. Confidence can spur a person to action, to take risks. But we should all be more aware of when confidence tips into overconfidence, and testosterone supplements could encourage that. Ironically, these supplements might make someone feel bold enough to lead but probably reduce his ability to lead well.

The television ads promise youth and vigor, but they’ve left out the catch: Testosterone enhancement doesn’t just make you feel like an invincible 18-year-old. It makes you think like one, too.

More information:
» Testosterone Testing and Testosterone Replacement Therapy
» Should the Modern Man Be Taking Testosterone?
» Male Hormone Molds Women, Too, In Mind and Body

Friday, June 23, 2017

The Cognitive-Theoretic Model of the Universe

Christopher Michael Langan:
Those interested in serious theories include just about everyone, from engineers and stockbrokers to doctors, automobile mechanics and police detectives.  Practically anyone who gives advice, solves problems or builds things that function needs a serious theory from which to work.   But three groups who are especially interested in serious theories are scientists, mathematicians and philosophers.  These are the groups which place the strictest requirements on the theories they use and construct. 

While there are important similarities among the kinds of theories dealt with by scientists, mathematicians and philosophers, there are important differences as well.  The most important differences involve the subject matter of the theories.  Scientists like to base their theories on experiment and observation of the real world…not on perceptions themselves, but on what they regard as concrete “objects of the senses”.  That is, they like their theories to be empirical.  Mathematicians, on the other hand, like their theories to be essentially rational…to be based on logical inference regarding abstract mathematical objects existing in the mind, independently of the senses.  And philosophers like to pursue broad theories of reality aimed at relating these two kinds of object.  (This actually mandates a third kind of object, the infocognitive syntactic operator…but another time.)     

Of the three kinds of theory, by far the lion’s share of popular reportage is commanded by theories of science.  Unfortunately, this presents a problem.  For while science owes a huge debt to philosophy and mathematics – it can be characterized as the child of the former and the sibling of the latter - it does not even treat them as its equals.  It treats its parent, philosophy, as unworthy of consideration.  And although it tolerates and uses mathematics at its convenience, relying on mathematical reasoning at almost every turn, it acknowledges the remarkable obedience of objective reality to mathematical principles as little more than a cosmic “lucky break”.  

Science is able to enjoy its meretricious relationship with mathematics precisely because of its queenly dismissal of philosophy.  By refusing to consider the philosophical relationship between the abstract and the concrete on the supposed grounds that philosophy is inherently impractical and unproductive, it reserves the right to ignore that relationship even while exploiting it in the construction of scientific theories.  And exploit the relationship it certainly does!  There is a scientific platitude stating that if one cannot put a number to one's data, then one can prove nothing at all.  But insofar as numbers are arithmetically and algebraically related by various mathematical structures, the platitude amounts to a thinly veiled affirmation of the mathematical basis of knowledge. 

Although scientists like to think that everything is open to scientific investigation, they have a rule that explicitly allows them to screen out certain facts.  This rule is called the scientific method.  Essentially, the scientific method says that every scientist’s job is to (1) observe something in the world, (2) invent a theory to fit the observations, (3) use the theory to make predictions, (4) experimentally or observationally test the predictions, (5) modify the theory in light of any new findings, and (6) repeat the cycle from step 3 onward.  But while this method is very effective for gathering facts that match its underlying assumptions, it is worthless for gathering those that do not. 

In fact, if we regard the scientific method as a theory about the nature and acquisition of scientific knowledge (and we can), it is not a theory of knowledge in general.  It is only a theory of things accessible to the senses.  Worse yet, it is a theory only of sensible things that have two further attributes: they are non-universal and can therefore be distinguished from the rest of sensory reality, and they can be seen by multiple observers who are able to “replicate” each other’s observations under like conditions.  Needless to say, there is no reason to assume that these attributes are necessary even in the sensory realm.  The first describes nothing general enough to coincide with reality as a whole – for example, the homogeneous medium of which reality consists, or an abstract mathematical principle that is everywhere true - and the second describes nothing that is either subjective, like human consciousness, or objective but rare and unpredictable…e.g. ghosts, UFOs and yetis, of which jokes are made but which may, given the number of individual witnesses reporting them, correspond to real phenomena. 

The fact that the scientific method does not permit the investigation of abstract mathematical principles is especially embarrassing in light of one of its more crucial steps: “invent a theory to fit the observations.”  A theory happens to be a logical and/or mathematical construct whose basic elements of description are mathematical units and relationships.  If the scientific method were interpreted as a blanket description of reality, which is all too often the case, the result would go something like this: “Reality consists of all and only that to which we can apply a protocol which cannot be applied to its own (mathematical) ingredients and is therefore unreal.”  Mandating the use of “unreality” to describe “reality” is rather questionable in anyone’s protocol.  

What about mathematics itself?  The fact is, science is not the only walled city in the intellectual landscape.  With equal and opposite prejudice, the mutually exclusionary methods of mathematics and science guarantee their continued separation despite the (erstwhile) best efforts of philosophy.  While science hides behind the scientific method, which effectively excludes from investigation its own mathematical ingredients, mathematics divides itself into “pure” and “applied” branches and explicitly divorces the “pure” branch from the real world.  Notice that this makes “applied” synonymous with “impure”.  Although the field of applied mathematics by definition contains every practical use to which mathematics has ever been put, it is viewed as “not quite mathematics” and therefore beneath the consideration of any “pure” mathematician.   

In place of the scientific method, pure mathematics relies on a principle called the axiomatic method.  The axiomatic method begins with a small number of self-evident statements called axioms and a few rules of inference through which new statements, called theorems, can be derived from existing statements.  In a way parallel to the scientific method, the axiomatic method says that every mathematician’s job is to (1) conceptualize a class of mathematical objects; (2) isolate its basic elements, its most general and self-evident principles, and the rules by which its truths can be derived from those principles; (3) use those principles and rules to derive theorems, define new objects, and formulate new propositions about the extended set of theorems and objects; (4) prove or disprove those propositions; (5) where the proposition is true, make it a theorem and add it to the theory; and (6) repeat from step 3 onwards. 
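As a rough caricature, the derive-and-extend loop of steps 3 through 6 can be sketched in a few lines of Python (a toy propositional system with invented axioms and a single modus ponens rule; nothing here is meant as real mathematics):

```python
# Toy axiomatic system: statements are strings, and the only inference
# rule is modus ponens: from P and the implication (P, Q), derive Q.
axioms = {"A", "B"}
implications = {("A", "C"), ("C", "D"), ("B", "E"), ("X", "Y")}

theorems = set(axioms)
changed = True
while changed:                       # repeat step 3 until nothing new derives
    changed = False
    for p, q in implications:
        if p in theorems and q not in theorems:
            theorems.add(q)          # step 5: a proven proposition becomes a theorem
            changed = True

print(sorted(theorems))  # ['A', 'B', 'C', 'D', 'E'] -- 'Y' is underivable
```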

The scientific and axiomatic methods are like mirror images of each other, but located in opposite domains.  Just replace “observe” with “conceptualize” and “part of the world” with “class of mathematical objects”, and the analogy practically completes itself.  Little wonder, then, that scientists and mathematicians often profess mutual respect.  However, this conceals an imbalance.  For while the activity of the mathematician is integral to the scientific method, that of the scientist is irrelevant to mathematics (except for the kind of scientist called a “computer scientist”, who plays the role of ambassador between the two realms).  At least in principle, the mathematician is more necessary to science than the scientist is to mathematics. 

As a philosopher might put it, the scientist and the mathematician work on opposite sides of the Cartesian divider between mental and physical reality.  If the scientist stays on his own side of the divider and merely accepts what the mathematician chooses to throw across, the mathematician does just fine.  On the other hand, if the mathematician does not throw across what the scientist needs, then the scientist is in trouble.  Without the mathematician’s functions and equations from which to build scientific theories, the scientist would be confined to little more than taxonomy.  As far as quantitative predictions are concerned, he or she might as well be guessing the number of jellybeans in a candy jar.  

From this, one might be tempted to theorize that the axiomatic method does not suffer from the same kind of inadequacy as does the scientific method…that it, and it alone, is sufficient to discover all of the abstract truths rightfully claimed as “mathematical”.  But alas, that would be too convenient.  In 1931, an Austrian mathematical logician named Kurt Gödel proved that there are true mathematical statements that cannot be proven by means of the axiomatic method.  Such statements are called “undecidable”.  Gödel’s finding rocked the intellectual world to such an extent that even today, mathematicians, scientists and philosophers alike are struggling to figure out how best to weave the loose thread of undecidability into the seamless fabric of reality. 

To demonstrate the existence of undecidability, Gödel used a simple trick called self-reference.  Consider the statement “this sentence is false.”  It is easy to dress this statement up as a logical formula.  Aside from being true or false, what else could such a formula say about itself?  Could it pronounce itself, say, unprovable?  Let’s try it: "This formula is unprovable".  If the given formula is in fact unprovable, then it is true and therefore a theorem.  Unfortunately, the axiomatic method cannot recognize it as such without a proof.  On the other hand, suppose it is provable.  Then it is self-apparently false (because its provability belies what it says of itself) and yet true (because provable without respect to content)!  It seems that we still have the makings of a paradox…a statement that is "unprovably provable" and therefore absurd.  
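The liar sentence's trouble can be made concrete with a two-case truth-table check (a minimal sketch in Python): a sentence L that asserts its own falsehood admits no consistent truth value.

```python
# L asserts "L is false". An assignment of True or False to L is
# consistent only if what L says (not L) matches L's assigned value.
consistent_values = [L for L in (True, False) if L == (not L)]
print(consistent_values)  # [] -- neither assignment works: a genuine paradox
```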

But what if we now introduce a distinction between levels of proof?  For example, what if we define a metalanguage as a language used to talk about, analyze or prove things regarding statements in a lower-level object language, and call the base level of Gödel’s formula the "object" level and the higher (proof) level the "metalanguage" level?  Now we have one of two things: a statement that can be metalinguistically proven to be linguistically unprovable, and thus recognized as a theorem conveying valuable information about the limitations of the object language, or a statement that cannot be metalinguistically proven to be linguistically unprovable, which, though uninformative, is at least no paradox.  Voilà: self-reference without paradox!  It turns out that "this formula is unprovable" can be translated into a generic example of an undecidable mathematical truth.  Because the associated reasoning involves a metalanguage of mathematics, it is called “metamathematical”.

It would be bad enough if undecidability were the only thing inaccessible to the scientific and axiomatic methods together. But the problem does not end there.  As we noted above, mathematical truth is only one of the things that the scientific method cannot touch.  The others include not only rare and unpredictable phenomena that cannot be easily captured by microscopes, telescopes and other scientific instruments, but things that are too large or too small to be captured, like the whole universe and the tiniest of subatomic particles; things that are “too universal” and therefore indiscernible, like the homogeneous medium of which reality consists; and things that are “too subjective”, like human consciousness, human emotions, and so-called “pure qualities” or qualia.  Because mathematics has thus far offered no means of compensating for these scientific blind spots, they continue to mark holes in our picture of scientific and mathematical reality. 

But mathematics has its own problems.  Whereas science suffers from the problems just described – those of indiscernibility and induction, nonreplicability and subjectivity – mathematics suffers from undecidability.  It therefore seems natural to ask whether there might be any other inherent weaknesses in the combined methodology of math and science.  There are indeed.  Known as the Löwenheim-Skolem theorem and the Duhem-Quine thesis, they are the respective stock-in-trade of disciplines called model theory and the philosophy of science (like any parent, philosophy always gets the last word).  These weaknesses have to do with ambiguity…with the difficulty of telling whether a given theory applies to one thing or another, or whether one theory is “truer” than another with respect to what both theories purport to describe.  

But before giving an account of Löwenheim-Skolem and Duhem-Quine, we need a brief introduction to model theory.  Model theory is part of the logic of “formalized theories”, a branch of mathematics dealing rather self-referentially with the structure and interpretation of theories that have been couched in the symbolic notation of mathematical logic…that is, in the kind of mind-numbing chicken-scratches that everyone but a mathematician loves to hate.  Since any worthwhile theory can be formalized, model theory is a sine qua non of meaningful theorization.  

Let’s make this short and punchy. We start with propositional logic, which consists of nothing but tautological, always-true relationships among sentences represented by single variables.  Then we move to predicate logic, which considers the content of these sentential variables…what the sentences actually say.  In general, these sentences use symbols called quantifiers to assign attributes to variables semantically representing mathematical or real-world objects.  Such assignments are called “predicates”.  Next, we consider theories, which are complex predicates that break down into systems of related predicates; the universes of theories, which are the mathematical or real-world systems described by the theories; and the descriptive correspondences themselves, which are called interpretations.  A model of a theory is any interpretation under which all of the theory’s statements are true.  If we refer to a theory as an object language and to its referent as an object universe, the intervening model can only be described and validated in a metalanguage of the language-universe complex. 
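For propositional logic, these definitions are easy to make concrete (a minimal sketch in Python; the theory and interpretations below are invented for illustration):

```python
# A "theory" is a set of sentences, here encoded as predicates over an
# interpretation: a dict assigning a truth value to each atomic variable.
theory = [
    lambda m: m["p"] or m["q"],        # sentence: p or q
    lambda m: (not m["p"]) or m["r"],  # sentence: p implies r
]

def is_model(interpretation, theory):
    """An interpretation is a model iff every sentence of the theory is true under it."""
    return all(sentence(interpretation) for sentence in theory)

m1 = {"p": True, "q": False, "r": True}   # satisfies both sentences
m2 = {"p": True, "q": False, "r": False}  # violates "p implies r"
print(is_model(m1, theory), is_model(m2, theory))  # True False
```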

Though formulated in the mathematical and scientific realms respectively, Löwenheim-Skolem and Duhem-Quine can be thought of as opposite sides of the same model-theoretic coin.  Löwenheim-Skolem says that a theory cannot in general distinguish between two different models; for example, any true theory about the numeric relationship of points on a continuous line segment can also be interpreted as a theory of the integers (counting numbers).  On the other hand, Duhem-Quine says that two theories cannot in general be distinguished on the basis of any observation statement regarding the universe. 

Just to get a rudimentary feel for the subject, let’s take a closer look at the Duhem-Quine Thesis.  Observation statements, the raw data of science, are statements that can be proven true or false by observation or experiment.  But observation is not independent of theory; an observation is always interpreted in some theoretical context. So an experiment in physics is not merely an observation, but the interpretation of an observation.  This leads to the Duhem Thesis, which states that scientific observations and experiments cannot invalidate isolated hypotheses, but only whole sets of theoretical statements at once.  This is because a theory T composed of various laws {Li}, i=1,2,3,… almost never entails an observation statement except in conjunction with various auxiliary hypotheses {Aj}, j=1,2,3,… .  Thus, an observation statement at most disproves the complex {Li+Aj}.   

To take a well-known historical example, let T = {L1,L2,L3} be Newton’s three laws of motion, and suppose that these laws seem to entail the observable consequence that the orbit of the planet Uranus is O.  But in fact, Newton’s laws alone do not determine the orbit of Uranus.  We must also consider things like the presence or absence of other forces, other nearby bodies that might exert appreciable gravitational influence on Uranus, and so on.  Accordingly, determining the orbit of Uranus requires auxiliary hypotheses like A1 = “only gravitational forces act on the planets”, A2 = “the total number of solar planets, including Uranus, is 7,” et cetera.  So if the orbit in question is found to differ from the predicted value O, then instead of simply invalidating the theory T of Newtonian mechanics, this observation invalidates the entire complex of laws and auxiliary hypotheses {L1,L2,L3;A1,A2,…}.  It would follow that at least one element of this complex is false, but which one?  Is there any 100% sure way to decide? 
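Duhem's point, that a failed prediction refutes only the whole conjunction and never a single member, can be sketched as a toy count (a Python sketch; the five-hypothesis setup mirrors the example above, but the enumeration is purely illustrative):

```python
from itertools import product

# The complex {L1, L2, L3, A1, A2} jointly entails prediction O.
# Observing not-O tells us at least one member is false, but not which:
# every truth assignment with at least one False member remains possible.
hypotheses = ["L1", "L2", "L3", "A1", "A2"]
candidates = [combo for combo in product([True, False], repeat=len(hypotheses))
              if not all(combo)]
print(len(candidates))  # 31 of the 32 assignments survive the observation
```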

As it turned out, the weak link in this example was the hypothesis A2 = “the total number of solar planets, including Uranus, is 7”.  In fact, there turned out to be an additional large planet, Neptune, which was subsequently sought and located precisely because this hypothesis (A2) seemed open to doubt.  But unfortunately, there is no general rule for making such decisions.  Suppose we have two theories T1 and T2 that predict observations O and not-O respectively.  Then an experiment is crucial with respect to T1 and T2 if it generates exactly one of the two observation statements O or not-O.  Duhem’s arguments show that in general, one cannot count on finding such an experiment or observation.  In place of crucial observations, Duhem cites le bon sens (good sense), a non-logical faculty by means of which scientists supposedly decide such issues.  Regarding the nature of this faculty, there is in principle nothing that rules out personal taste and cultural bias.  That scientists prefer lofty appeals to Occam’s razor, while mathematicians employ justificative terms like beauty and elegance, does not exclude less savory influences.

So much for Duhem; now what about Quine?  The Quine thesis breaks down into two related theses.  The first says that there is no distinction between analytic statements (e.g. definitions) and synthetic statements (e.g. empirical claims), and thus that the Duhem thesis applies equally to the so-called a priori disciplines.  To make sense of this, we need to know the difference between analytic and synthetic statements.  Analytic statements are supposed to be true by their meanings alone, matters of empirical fact notwithstanding, while synthetic statements amount to empirical facts themselves.  Since analytic statements are necessarily true statements of the kind found in logic and mathematics, while synthetic statements are contingently true statements of the kind found in science, Quine’s first thesis posits a kind of equivalence between mathematics and science.  In particular, it says that epistemological claims about the sciences should apply to mathematics as well, and that Duhem’s thesis should thus apply to both. 

Quine’s second thesis involves the concept of reductionism.  Reductionism is the claim that statements about some subject can be reduced to, or fully explained in terms of, statements about some (usually more basic) subject.  For example, to pursue chemical reductionism with respect to the mind is to claim that mental processes are really no more than biochemical interactions.  Specifically, Quine breaks from Duhem in holding that not all theoretical claims, i.e. theories, can be reduced to observation statements.  But then empirical observations “underdetermine” theories and cannot decide between them.  This leads to a concept known as Quine’s holism; because no observation can reveal which member(s) of a set of theoretical statements should be re-evaluated, the re-evaluation of some statements entails the re-evaluation of all. 

Quine combined his two theses as follows.  First, he noted that a reduction is essentially an analytic statement to the effect that one theory, e.g. a theory of mind, is defined on another theory, e.g. a theory of chemistry.  Next, he noted that if there are no analytic statements, then reductions are impossible.  From this, he concluded that his two theses were essentially identical.  But although the resulting unified thesis resembled Duhem’s, it differed in scope. For whereas Duhem had applied his own thesis only to physical theories, and perhaps only to theoretical hypotheses rather than theories with directly observable consequences, Quine applied his version to the entirety of human knowledge, including mathematics.  If we sweep this rather important distinction under the rug, we get the so-called “Duhem-Quine thesis”. 

Because the Duhem-Quine thesis implies that scientific theories are underdetermined by physical evidence, it is sometimes called the Underdetermination Thesis.  Specifically, it says that because the addition of new auxiliary hypotheses, e.g. conditionals involving “if…then” statements, would enable each of two distinct theories on the same scientific or mathematical topic to accommodate any new piece of evidence, no physical observation could ever decide between them.  

The messages of Duhem-Quine and Löwenheim-Skolem are as follows: universes do not uniquely determine theories according to empirical laws of scientific observation, and theories do not uniquely determine universes according to rational laws of mathematics.  The model-theoretic correspondence between theories and their universes is subject to ambiguity in both directions.  If we add this descriptive kind of ambiguity to ambiguities of measurement, e.g. the Heisenberg Uncertainty Principle that governs the subatomic scale of reality, and the internal theoretical ambiguity captured by undecidability, we see that ambiguity is an inescapable ingredient of our knowledge of the world.  It seems that math and science are…well, inexact sciences. 

How, then, can we ever form a true picture of reality?  There may be a way.  For example, we could begin with the premise that such a picture exists, if only as a “limit” of theorization (ignoring for now the matter of showing that such a limit exists).  Then we could educe categorical relationships involving the logical properties of this limit to arrive at a description of reality in terms of reality itself.  In other words, we could build a self-referential theory of reality whose variables represent reality itself, and whose relationships are logical tautologies.  Then we could add an instructive twist.  Since logic consists of the rules of thought, i.e. of mind, what we would really be doing is interpreting reality in a generic theory of mind based on logic.  By definition, the result would be a cognitive-theoretic model of the universe. 

Gödel used the term incompleteness to describe that property of axiomatic systems due to which they contain undecidable statements.  Essentially, he showed that all sufficiently powerful axiomatic systems are incomplete by showing that if they were not, they would be inconsistent.  Saying that a theory is “inconsistent” amounts to saying that it contains one or more irresolvable paradoxes.  Unfortunately, since any such paradox destroys the distinction between true and false with respect to the theory, the entire theory is crippled by the inclusion of a single one.  This makes consistency a primary necessity in the construction of theories, giving it priority over proof and prediction.  A cognitive-theoretic model of the universe would place scientific and mathematical reality in a self-consistent logical environment, there to await resolutions for its most intractable paradoxes. 

For example, modern physics is bedeviled by paradoxes involving the origin and directionality of time, the collapse of the quantum wave function, quantum nonlocality, and the containment problem of cosmology.  Were someone to present a simple, elegant theory resolving these paradoxes without sacrificing the benefits of existing theories, the resolutions would carry more weight than any number of predictions.  Similarly, any theory and model conservatively resolving the self-inclusion paradoxes besetting the mathematical theory of sets, which underlies almost every other kind of mathematics, could demand acceptance on that basis alone.  Wherever there is an intractable scientific or mathematical paradox, there is dire need of a theory and model to resolve it. 

If such a theory and model exist – and for the sake of human knowledge, they had better exist – they use a logical metalanguage with sufficient expressive power to characterize and analyze the limitations of science and mathematics, and are therefore philosophical and metamathematical in nature.  This is because no lower level of discourse is capable of uniting two disciplines that exclude each other’s content as thoroughly as do science and mathematics.  

Now here’s the bottom line: such a theory and model do indeed exist.  But for now, let us satisfy ourselves with having glimpsed the rainbow under which this theoretic pot of gold awaits us.

More information:
» Medium: Explaining the CTMU (Cognitive Theoretic Model Of The Universe)

Thursday, June 22, 2017

The Virtues: Grit

According to the Merriam-Webster dictionary, grit in the context of behavior is defined as “firmness of character; indomitable spirit.” The psychologist Angela Duckworth, based on her studies, tweaked this definition to be “perseverance and passion for long-term goals.” While I recognize that she is the expert, I questioned her modification…in particular the “long-term goals” part. Some of the grittiest people I’ve known lack the luxury to consider the big picture and instead must react to immediate needs. This doesn’t diminish the value of their fortitude, but rather underscores that grit perhaps is more about attitude than an end game.

But Duckworth’s research is conducted in the context of exceptional performance and success in the traditional sense, so it requires that grit be measured by test scores, degrees, and medals over an extended period of time. Specifically, talent and intelligence/IQ being equal, she explores this question: why do some individuals accomplish more than others? It is that distinction which allows her the liberty to evolve the definition, but it underscores the importance of defining her context.

The characteristics of grit outlined below include Duckworth’s findings as well as some that defy measurement. Duckworth herself is the first to say that the essence of grit remains elusive. It has hundreds of correlates, with nuances and anomalies, and your level depends on the expression of their interaction at any given point. Sometimes it is stronger, sometimes weaker, but the constancy of your tenacity is based on the degree to which you can access, ignite, and control it. So here are a few of the more salient characteristics to see how you measure up.

Courage
While courage is hard to measure, it is directly proportional to your level of grit. More specifically, your ability to manage fear of failure is imperative and a predictor of success. The supremely gritty are not afraid to tank, but rather embrace failure as part of the process. They understand that there are valuable lessons in defeat and that the vulnerability of perseverance is requisite for high achievement. Teddy Roosevelt, a Grand Sire of Grit, spoke about the importance of overcoming fear and managing vulnerability in an address he made at the Sorbonne in 1910. He stated:
It is not the critic who counts; not the man who points out how the strong man stumbles, or where the doer of deeds could have done them better. The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood; who strives valiantly; who errs, who comes short again and again, because there is no effort without error and shortcoming; but who does actually strive to do the deeds; who knows great enthusiasms, the great devotions; who spends himself in a worthy cause; who at the best knows in the end the triumph of high achievement, and who at the worst, if he fails, at least fails while daring greatly.
Fear of failure, or atychiphobia as the medical set calls it, can be a debilitating disorder, characterized by an unhealthy aversion to risk (or a strong resistance to embracing vulnerability). Symptoms include anxiety, mental blocks, and perfectionism, and scientists ascribe the condition to genetics, brain chemistry, and life experiences. However, don’t be alarmed…the problem is not insurmountable. On Amazon, a “fear of failure” search yields 28,879 results. And while there are countless manifestations and degrees of the affliction, a baseline antidote starts with listening to the words of Eleanor Roosevelt: “Do one thing every day that scares you.” As I noted in a recent post, courage is like a muscle; it has to be exercised daily. If you do, it will grow; ignored, it will atrophy. Courage helps fuel grit; the two are symbiotic, feeding into and off of each other…and you need to manage each and how they function together.

As a side note, some educators believe that the current trend of coddling our youth, by removing competition in sports for example, is preventing some kids from actually learning how to fail and to embrace it as an inevitable part of life. In our effort to protect our kids from disappointment are we inadvertently harming them? Coddling and cultivating courage may indeed turn out to be irreconcilable bedfellows. As with everything, perhaps the answer lies in the balance…more to come.

Conscientiousness: Achievement Oriented vs. Dependable
As you probably know, it is generally agreed that there are five core personality traits from which all human personalities stem, called…get this…the Big Five. They are: Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. Each exists on a continuum with its opposite on the other end, and our personality is the expression of the dynamic interaction of each and all at any given time. One minute you may feel more agreeable, the next more neurotic, but fortunately, day-to-day, they collectively remain fairly stable for most of us.

According to Duckworth, of the five personality traits, conscientiousness is the most closely associated with grit. However, it seems that there are two types, and how successful you will be depends on which type you are. Conscientiousness in this context means careful and painstaking; meticulous. But in a 1992 study, the psychologist L.M. Hough found the definition to be far more nuanced when applied to tenacity. Hough’s study distinguished the achievement aspects of conscientiousness from the dependability aspects.

The achievement-oriented individual is one who works tirelessly, tries to do a good job, and completes the task at hand, whereas the dependable person is more notably self-controlled and conventional. Not surprisingly, Hough discovered that achievement-oriented traits predicted job proficiency and educational success far better than dependability. So a self-controlled person who may never step out of line may fail to reach the same heights as their more mercurial friends. In other words, in the context of conscientiousness, grit, and success, it is important to commit to go for the gold rather than just show up for practice. Or, to put it less delicately, it’s better to be a racehorse than an ass.

Long-Term Goals and Endurance: Follow Through
As I wrote in the introduction, I had some reservations about accepting the difference between Webster’s definition of grit and Duckworth’s interpretation. Both have to do with perseverance, but the latter exists in the arena of extraordinary success and therefore requires a long-term time commitment. Well, since you are Forbes readers and destined for the pantheon of extraordinary success, it is important to concede that for you…long-term goals play an important role. Duckworth writes:
“… achievement is the product of talent and effort, the latter a function of the intensity, direction, and duration of one’s exertions towards a long-term goal.”

Malcolm Gladwell agrees. In his 2008 best-selling book Outliers, he examines the seminal conditions required for optimal success. We’re talking about the best of the best…the Beatles, Bill Gates, Steve Jobs. How did they build such impossibly powerful spheres of influence? Unfortunately, some of Gladwell’s findings point to dumb luck. Still, the area where Gladwell and Duckworth intersect (and what we can actually control) is the importance of goals and lots, and lots, and lots of practice…10,000 hours to be precise.

Turns out the baseline time commitment required to become a contender, even for those blessed with seemingly prodigious talent, is at least 20 hours a week over 10 years; at 50 weeks a year, that works out to exactly 10,000 hours, so Gladwell’s theory and Duckworth’s findings align to the hour. However, one of the distinctions between someone who succeeds and someone who is just spending a lot of time doing something is this: practice must have purpose. That’s where long-term goals come in. They provide the context and framework in which to find the meaning and value of your long-term efforts, which helps cultivate drive, sustainability, passion, courage, stamina…grit.

Resilience: Optimism, Confidence, and Creativity
Of course, on your long haul to greatness you’re going to stumble, and you will need to get back up on the proverbial horse. But what is it that gives you the strength to get up, wipe the dust off, and remount? Futurist and author Andrew Zolli says it’s resilience. I’d have to agree with that one.
In Zolli’s book, Resilience: Why Things Bounce Back, he defines resilience as “the ability of people, communities, and systems to maintain their core purpose and integrity among unforeseen shocks and surprises.”

For Zolli, resilience is a dynamic combination of optimism, creativity, and confidence, which together empower one to reappraise situations and regulate emotion – a behavior many social scientists refer to as “hardiness” or “grit.” Zolli takes it even further and explains that “hardiness” comprises three tenets: “(1) the belief one can find meaningful purpose in life, (2) the belief that one can influence one’s surroundings and the outcome of events, and (3) the belief that positive and negative experiences will lead to learning and growth.”

Wait, what? There is a lot going on here, but this is my take on the situation, in an elemental equation: Optimism + Confidence + Creativity = Resilience = Hardiness = (more or less) Grit. So, while a key component of grit is resilience, resilience is the powering mechanism that draws your head up, moves you forward, and helps you persevere despite whatever obstacles you face along the way. In other words, gritty people believe, “everything will be alright in the end, and if it is not alright, it is not the end.”

Excellence vs. Perfection
In general, gritty people don’t seek perfection, but instead strive for excellence. It may seem that these two have only subtle semantic distinctions; but in fact they are quite at odds. Perfection is excellence’s somewhat pernicious cousin. It is pedantic, binary, unforgiving and inflexible. Certainly there are times when “perfection” is necessary to establish standards, like in performance athletics such as diving and gymnastics. But in general, perfection is someone else’s perception of an ideal, and pursuing it is like chasing a hallucination. Anxiety, low self-esteem, obsessive compulsive disorder, substance abuse, and clinical depression are only a few of the conditions ascribed to “perfectionism.” To be clear, those are ominous barriers to success.

Excellence is an attitude, not an endgame. The word excellence is derived from the Greek arete, which is bound up with the notion of fulfillment of purpose or function and is closely associated with virtue. It is far more forgiving than perfection, allowing and embracing failure and vulnerability on the ongoing quest for improvement. It allows for disappointment, and it prioritizes progress over perfection. Like excellence, grit is an attitude about, to paraphrase Tennyson…seeking, striving, finding, and never yielding.

Monday, June 19, 2017

Our Animal Nature: Fear, Aggression, Sociability

"My purpose in making an extravagant suggestion is to start a discussion. The problem is so real, and nobody is talking about the solution."
The Atlantic:
As personal rights and freedoms have expanded during this century, there has been less and less talk about temperament. Throughout most of history, however, people have been regarded less as unique individuals than as variations on a few basic human types. In the fifth century B.C. Hippocrates described four temperaments, which he considered to be linked to various predominant bodily fluids, or humors: the sanguine temperament is optimistic and energetic, the melancholic is moody and withdrawn, the choleric is irritable and impulsive, and the phlegmatic is calm and slow. However quaint this theory may seem, Hippocrates anticipated modern linkages of biochemistry with behavior and astutely described types of people as familiar today as they were in antiquity.

By the 1940s two powerful ideologies diverted scientists' age-old interest in the biological dimensions of personality. First, Freud asserted the overwhelming importance of personal history in determining what his followers called character. Second, revulsion at Nazism's proclamation of inferior and superior genetic types converged with the spread of democratic ideas to focus academe on racial equality and the formative power of environment. Among the few scientists to express interest in temperament was I. P. Pavlov, the dark prince of conditioning, who observed of his dogs that "the final nervous activity present in the animal is an alloy of the features peculiar to the type and of the changes wrought by the environment." "Excitatory," choleric dogs, like Slick Willy, were by nature "pugnacious, passionate, and easily and quickly irritated," while the "inhibitory," or melancholic, animal "believes in nothing, hopes for nothing, in everything he sees the dark side." Of the two stabler sorts that Pavlov observed, one was "self-contained and quiet; persistent and steadfast," and the other "energetic and very productive" but easily bored. Such insights, however, were dwarfed by mountains of literature on what our mothers did to us.

Like the ghost in the machine or the mind in the brain, temperament is best glimpsed in action. To discern it, watch a person communicate, says Hagop Akiskal, the senior science adviser on affective and related disorders at the National Institute of Mental Health: "It's not just a matter of personality but something more basic that has to do with rhythm, reactivity, emotion." Of all species, Homo sapiens has the most feelings. Just as drives such as hunger and sleep are more flexible than reflexes like the eye blink and the knee jerk, emotions, which are physiological as well as psychological events, give us more behavioral options than do drives. Some emotions are so basic and universal that the psychologist Hans Eysenck, a pioneer of the modern biological study of personality, who conducts research in London at the Institute of Psychiatry, believes that they're nothing less than the lowest common denominators of human experience. "We've done our studies in thirty-six countries," he says, "and everywhere we find the same three ways in which behavior can differ." To varying degrees all people express fear, which helps us avoid danger; aggression, which enables us to fight it; and extraversion, or sociability, which enables us to face it with equanimity. Fundamentally, our temperaments are distinguished by the traits of anxiety, irritability, and élan.

That our natures are organized around our habitual reactions to threat has given Philip Gold, a research psychiatrist who is the chief of the neuroendocrinology branch of the NIMH, a "tragic view of the human condition." Physical or emotional, real or perceived, danger lurks everywhere, and from an evolutionary perspective our species' great asset and, sometimes, liability is an extremely sensitive emotional and physiological arousal system that detects and reacts to it. This is the stress, or "fight or flight," response. The stable sorts of people whom modern researchers describe as uninhibited, bold, or relaxed can cope with life's vicissitudes—from a snake in the jungle to a fire-breathing boss—in a manner Gold describes as "philosophical," because their stress response isn't triggered by every little thing and doesn't stay on red alert longer than necessary. These resilient people are innately disposed, Gold says, "to celebrate the beauty of existence and the wonders of an interior life and external connections despite being surrounded by unanswerable questions, ambiguous dilemmas, and the certainty of loss and death."

Those who naturally react to the threatening or the merely unfamiliar with an excess of either the flight or the fight response are in for more trouble. Because their stress response spikes frequently and ebbs slowly, Hippocrates' melancholics, whom scientists now describe as anxious, inhibited, or reactive, are so worn down that they are apt to behave in what Gold calls a depressive way: "Faced with a setback, for example, they say it occurred because they're worthless." To protect themselves, the flight-prone often cultivate an avoidant way of life that worsens their plight. "They're likelier to survive in truly threatening situations," Gold says, "but they have less comfortable lives." Hippocrates' cholerics, like Willy and Burton, respond to stress by going into fight mode. To these people, whom researchers variously call aggressive, impulsive, or irritable, the dark possibility of pain and defeat is so intense, Gold says, "that they can't bear to be accountable for it in a depressive way." Instead they blame it on others, and strike out. Although the bias toward one of these fundamental emotional tones, or temperaments, "has to do with what a person has learned he has to be in order to be loved," Gold says, "it also has to do with genetic factors that biologically predispose him to respond in a certain way to the paradigmatic human situations of pleasure and opportunity, danger and loss." He continues, "In the blood-and-guts world of challenges, these differences in the stress response account for the fundamental parameters of what people are like."

Monday, June 12, 2017

The Evolution of Birth Control

'Thruppenny upright'
No birth control/Zeus did it
Nuns' convent found an orphan
Ergotamine/Silphium plant
Madam Millie: intravaginal Coca-Cola
Henry VIII's Catherine: intravaginal smooth pebble
Pennyroyal herb/tea
19th century: copper coin, pH
Tansy, blue and black cohosh
13th century: Peter of Spain on contraception
Roman times: condoms of pig gut, lamb
China: oiled silk paper or lamb skin; Japan: tortoise shell or animal horn
Alaska: half an orange as diaphragm
Wild West: silver dollar as diaphragm

Friday, June 9, 2017

Real Deal #33: Why Being Kind is Good for Your Health

Dr. Michele Borba, child psychologist: "Empathy is a key ingredient of resilience, the foundation to trust, the benchmark of humanity and core to everything that makes a society civilized."


Quiet Revolution:
We all know the golden rule: treat others the way you want to be treated. While this is an old adage we learn from an early age, there are a number of real-life benefits associated with the way we treat others. Science shows that as children, we’re biologically wired to be kind and we can further develop this trait with practice and repetition. Sometimes, however, due to outside influences and the stress of our day-to-day lives, we can lose this inherent ability. 

Kindness and empathy help us relate to other people and have more positive relationships with friends, family, and even perfect strangers we encounter in our daily lives. Besides just improving personal relationships, however, kindness can actually make you healthier. 


Here are science-backed ways to improve your health through kindness:

Kindness releases feel-good hormones

Have you ever noticed that when you do something nice for someone else, it makes you feel better too? This isn’t just something that happens randomly—it has to do with the pleasure centers in your brain. 

Doing nice things for others boosts your serotonin, the neurotransmitter responsible for feelings of satisfaction and well-being. Like exercise, altruism also releases endorphins, a phenomenon known as a “helper’s high.”

Research from psychology professor Sonja Lyubomirsky reports that when we're kind to another person, we feel more optimistic and positive. In addition to fostering feel-good emotions, kindness and empathy toward others are actually good for our health.

Kindness eases anxiety

Anxiety, whether it’s mild nervousness or severe panic, is an extremely common human experience. While there are several ways to reduce anxiety, such as meditation, exercise, prescription medications, and natural remedies, it turns out that being nice to others can be one of the easiest, most inexpensive ways to keep anxiety at bay.

As pointed out in a study on happiness from the University of British Columbia (UBC), “social anxiety is associated with low positive affect (PA), a factor that can significantly affect psychological well-being and adaptive functioning.” Positive affect refers to an individual’s experience of positive moods such as joy, interest, and alertness. UBC researchers found that participants who engaged in kind acts displayed significant increases in PA that were sustained over the four weeks of the study.

Performing good deeds for others, even over as little as a 10-day span, has been reported to boost happiness and life satisfaction.

Even a small gesture can make a big difference.

Kindness is good for your heart

Making others feel good can “warm” your heart, sure—but being nice to others can also affect the actual chemical balance of your heart. 

Kindness releases the hormone oxytocin. According to Dr. David Hamilton, “oxytocin causes the release of a chemical called nitric oxide in blood vessels, which dilates (expands) the blood vessels. This reduces blood pressure and therefore oxytocin is known as a ‘cardioprotective’ hormone because it protects the heart (by lowering blood pressure).” 

Kindness strengthens your heart physically and emotionally. Maybe that’s why they say nice, caring people have really big hearts?

It can help you live longer

You may be shaking your head at this one, but we’re not just saying this—there’s science to back it up. You’re at a greater risk of heart disease if you don’t have a strong network of family and friends. When you’re kind to others, you develop strong, meaningful relationships and friendships.

So, go ahead and make some new friends, or expand your kindness and compassion to the ones you already have.

It reduces stress

In our busy, always-on-the-go lives, we’re constantly looking for ways to reduce stress. It may be easier than we think. Helping others lets you get outside of yourself and take a break from the stressors in your own life, and this behavior can also make you better equipped to handle stressful situations. 

Affiliative behavior is any behavior that builds your relationships with others. According to a study on the effects of prosocial behavior on stress, “affiliative behavior may be an important component of coping with stress and indicate that engaging in prosocial behavior (action intended to help others) might be an effective strategy for reducing the impact of stress on emotional functioning.”

Kindness prevents illness

Inflammation in the body is associated with all sorts of health problems such as diabetes, cancer, chronic pain, obesity, and migraines. According to a study of adults aged 57-85, “volunteering manifested the strongest association with lower levels of inflammation.” Oxytocin also reduces inflammation, and even little acts of kindness can trigger oxytocin’s release. 


Spending time each day cultivating an attitude of compassion promotes happiness and life satisfaction, and it helps compassion come more naturally to kids and adults alike:
 "Studies illustrate that kids' ability to feel for others affects their health, wealth and authentic happiness as well as their emotional, social, cognitive development and performance," explains Michele Borba, child psychologist. "Empathy activates conscience and moral reasoning, improves happiness, curbs bullying and aggression, enhances kindness and peer inclusiveness, reduces prejudice and racism, promotes heroism and moral courage and boosts relationship satisfaction."

Scientific studies have shown that spreading kindness creates a ripple effect (three degrees of separation) that spreads outward and touches others' lives.

Kindness may be the secret sauce to a healthy, happy life. So, go ahead and volunteer, help someone in need, buy someone coffee or lunch, or try one of these ideas—it may be just the pick-me-up you need.

More information
» Emma Seppala, PhD: "The Science of Compassion"

Sunday, June 4, 2017

Real Deal #32: Why Unfocusing is Just As Important as Focusing


According to the Alternative Board’s 2017 Small Business Pulse Survey, 85 percent of entrepreneurs surveyed said they work 40-plus hours a week, and the majority felt that they were "too busy" to develop strategic plans for their businesses. As a neuroscientist who is also an entrepreneur, I don't find it hard to imagine why this hive of activity causes strategic thinking to fall by the wayside. Simply put, it overwhelms the brain.

The reason is that entrepreneurs, feeling panicked, scramble to tune out all distractions and devote their undivided attention to each task on their list. But what if I told you this isn't the best thing you could do? What if I said you should instead doodle pictures of faces, geometric shapes, letters or some form of art -- gorgeous or obscure -- while you complete your tasks?

In fact, doodling activates the default mode network (DMN) -- the brain’s unfocus circuit. And, don’t let the word “unfocus” fool you, either, because the DMN is all action. When turned on, it becomes one of the greatest consumers of energy in the brain, eating up a whopping 20 percent of the body’s energy at rest.

It is constantly shuttling memories back and forth and making connections that lead to creative insights and more accurate predictions -- all things an entrepreneur can cherish.

Furthermore, when the DMN is activated, your "self" metaphorically assumes center stage in the brain. In this state of self-connectedness, you become a far superior mirror of others’ perspectives, allowing you to better empathize with your clients and cohorts. Ultimately, with these deeper insights about yourself and others, your brain becomes a master predictor. It is better prepared to make clear, heartfelt, high-level decisions in the spur of the moment.


Harvard Business Review:
There are many simple and effective ways to activate this circuit in the course of a day.

Using positive constructive daydreaming (PCD): PCD is a type of mind-wandering different from slipping into a daydream or guiltily rehashing worries. When you build it into your day deliberately, it can boost your creativity, strengthen your leadership ability, and also re-energize the brain. To start PCD, you choose a low-key activity such as knitting, gardening or casual reading, then wander into the recesses of your mind. But unlike slipping into a daydream or guilty-dysphoric daydreaming, you might first imagine something playful and wishful—like running through the woods, or lying on a yacht. Then you swivel your attention from the external world to the internal space of your mind with this image in mind while still doing the low-key activity.

Studied for decades by Jerome Singer, PCD activates the DMN and metaphorically changes the silverware that your brain uses to find information. While focused attention is like a fork—picking up obvious conscious thoughts that you have, PCD commissions a different set of silverware—a spoon for scooping up the delicious mélange of flavors of your identity (the scent of your grandmother, the feeling of satisfaction with the first bite of apple-pie on a crisp fall day), chopsticks for connecting ideas across your brain (to enhance innovation), and a marrow spoon for getting into the nooks and crannies of your brain to pick up long-lost memories that are a vital part of your identity. In this state, your sense of “self” is enhanced—which, according to Warren Bennis, is the essence of leadership. I call this the psychological center of gravity, a grounding mechanism (part of your mental “six-pack”) that helps you enhance your agility and manage change more effectively too.

Taking a nap: In addition to building in time for PCD, leaders can also consider authorized napping. Not all naps are the same. When your brain is in a slump, your clarity and creativity are compromised. After a 10-minute nap, studies show that you become much clearer and more alert. But if it’s a creative task you have in front of you, you will likely need a full 90 minutes for more complete brain refreshing. Your brain requires this longer time to make more associations, and dredge up ideas that are in the nooks and crannies of your memory network.

Pretending to be someone else: When you’re stuck in a creative process, unfocus may also come to the rescue when you embody and live out an entirely different personality. A recent column in Harvard Business Review highlighted the importance of “unfocusing,” and more specifically, the results of a study demonstrating something called the “creative stereotype effect.” What’s particularly striking about these results is that they portray creativity not as a trait that you either have or don’t have, but as something that can be shaped depending on the context.

No one is necessarily a “creative person”; creativity can really be harnessed by anybody, according to these results. By allowing that sort of underused part of the brain to do its thing, we manage to think and behave more creatively than we normally would.

In 2016, the educational psychologists Denis Dumas and Kevin Dunbar found that people who try to solve creative problems are more successful if they behave like an eccentric poet than like a rigid librarian. Given a test in which they have to come up with as many uses as possible for an object (e.g., a brick), those who behave like eccentric poets show superior creative performance. This finding holds even when the same person takes on each identity in turn.

When in a creative deadlock, try this exercise of embodying a different identity. It will likely get you out of your own head, and allow you to think from another person’s perspective. I call this "psychological halloweenism."

Imagine the unimaginable: Making a mental movie of a desired outcome aids execution because imagery warms up the action brain. Multiple studies confirm that training stroke patients to imagine moving slowed or paralyzed parts of their bodies can actually improve movement in those areas, especially when the patients concentrate intensely on those images.

De-stress: Stress cements bad habits in the brain and prevents people from embracing new patterns of thinking. Throughout history, it’s been known to cause even the most prolific business leaders to make unwise decisions on behalf of their companies. Try to identify, explore and dispel the stressors that often derail the brain. 

Yun RJ, Krystal JH, Mathalon DH. "Working memory overload: fronto-limbic interactions and effects on subsequent working memory function." Brain Imaging Behav. 2010 Mar; 4(1):96-108. doi: 10.1007/s11682-010-9089-9.

Denis Dumas and Kevin N. Dunbar. "The Creative Stereotype Effect." PLoS One. 2016; 11(2): e0142567.

Istvan Molnar-Szakacs and Lucina Q. Uddin. "Self-Processing and the Default Mode Network: Interactions with the Mirror Neuron System." Front Hum Neurosci. 2013; 7: 571. doi: 10.3389/fnhum.2013.00571.

Anna Abraham. "The World According to Me: Personal Relevance and the Medial Prefrontal Cortex." Front Hum Neurosci. 2013; 7: 341. doi: 10.3389/fnhum.2013.00341.


For years, focus has been venerated above all other abilities. Since we spend 46.9% of our days with our minds wandering away from the task at hand, we crave the ability to keep them fixed and on task. Yet, if we built doodling, PCD, 10- and 90-minute naps, and psychological halloweenism into our days, we would likely preserve focus for when we need it, and use it much more efficiently too. More importantly, unfocus will allow us to update information in the brain, giving us access to deeper parts of ourselves and enhancing our agility, creativity and decision-making too.

More information:
» Entrepreneur: "Why Successful Leaders Should Find Time to Doodle"
» Entrepreneur: "Solving the Engagement Conundrum Through Brain Science"
» New York Mag: "In Praise of Spacing Out"