SEELE on "Las Mentes del Marketing Político"

We are pleased to share the recent appearance of our CEO, Prof. César Monroy Fonseca, on the interview program "Las Mentes del Marketing Político", where he explains in simple, clear terms how neuroscience has entered the field of citizen-insight studies.

Click here



SEELE Neuroscience at the 6th Symposium on Organizational and Human Development

This Thursday, March 30, our scientific director will deliver the workshop "Non-Verbal Leadership and the Science Behind Empathy", debunking the myths that circulate under the label of "neuroleadership" to make way for real neuroscience. He will explain why oxytocin is not the "love hormone" and why mirror neurons have nothing to do with empathy. See you in Los Cabos, Baja California Sur!

Google, self-serving bias and the fall of knowledge


An original open-research study by SEELE Neuroscience

Fake news is the epitome of the Internet's credibility crisis. When I was a child, my English teacher shared with us a reading about the infamous mass panic Orson Welles caused in 1938 when he read a fragment of "The War of the Worlds" on a live radio broadcast. The lesson that day was: "in those years, people were too naïve toward mass media and judged everything as true." Years have passed, and we are facing an astonishingly similar reality with the Internet. The recent US elections demonstrated that most Internet users cannot distinguish whether a news site is fake, satirical, or propagandistic.

The fact that using Google Search and similar tools creates an illusion of knowledge is well documented in several journal papers, especially "Searching for explanations: How the Internet inflates estimates of internal knowledge" by Matthew Fisher, Mariel K. Goddu, and Frank C. Keil, published in 2015 in the Journal of Experimental Psychology. At SEELE Neuroscience, we replicated one of the paper's experiments to evaluate the role of the self-serving cognitive bias in judging one's own knowledge.


We generated a standard "general knowledge" quiz with ten items selected from an 8th-grade knowledge compendium. A multiple-choice Google Forms test was designed and uploaded with two sections: the first with the ten questions and their corresponding multiple-choice answers, and the second with a single self-assessment item: "How much knowledge should a person have to answer all the questions of this survey correctly?" Participants had three options: "less knowledge than mine", "the same knowledge as mine", and "more knowledge than mine".

Two independent researchers recruited people through social media with different instructions. Researcher A recruited Group A with the instruction: "This is a knowledge survey; please answer the questions at the following link. If you need to verify your responses, you may rely on the Google browser, but do not use it to search for the answers." Researcher B recruited Group B with the instruction: "This is a knowledge survey; please answer the questions at the following link. As this is a knowledge test, we encourage you to rely on yourself, without using external information." To "pass" the quiz, a minimum of six correct answers was required, but participants did not receive feedback about their performance.


A total of 89 participants were validated for Group A and 191 for Group B after eliminating subjects who did not follow instructions. Participants had an average age of 21 years, and the gender proportion was kept at 1:1. A statistically significant difference was detected (z = 2.898, P = 0.004, Yates corrected) between the proportions of subjects in Group A (0.067) and Group B (0.215) who answered "more knowledge than mine" on the self-assessment item (difference -0.147, 95% CI -0.241 to -0.053). The difference was also significant (z = 2.980, P = 0.003, Yates corrected) for the answer "the same knowledge as mine" (difference 0.157, 95% CI 0.059 to 0.255) between Group A (0.921) and Group B (0.764). No difference was detected between those answering "less knowledge than mine" (A = 0.011, B = 0.021; z = 0.0865, P = 0.931, Yates corrected).
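The proportion comparison above can be reproduced with a Yates-corrected chi-square test on a 2x2 contingency table, whose square root gives the reported z statistic. A minimal sketch, assuming the counts implied by the reported proportions (6/89 in Group A and 41/191 in Group B answering "more knowledge than mine"):

```python
# Hypothetical reconstruction of the reported two-proportion test.
# Counts are inferred from the published proportions, not from raw data.
import math
from scipy.stats import chi2_contingency

def yates_z(success_a, n_a, success_b, n_b):
    """Compare two proportions via a Yates-corrected chi-square test;
    with df = 1, the z statistic is the square root of the chi-square."""
    table = [[success_a, n_a - success_a],
             [success_b, n_b - success_b]]
    chi2, p, _, _ = chi2_contingency(table, correction=True)
    return math.sqrt(chi2), p

z, p = yates_z(6, 89, 41, 191)   # "more knowledge than mine" counts
print(f"z = {z:.3f}, P = {p:.3f}")  # matches the reported z = 2.898, P = 0.004
```

The same function applied to the other two response categories reproduces the remaining comparisons.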

Group A vs. Group B self-assessment

Despite Group A's significant overestimation of their own knowledge compared to Group B, there were no statistical differences in quiz performance. The proportion of subjects who passed the quiz in Group A (0.629) and in Group B (0.660) was equivalent (z = 0.363, P = 0.716, Yates corrected), meaning that those who benefited from Google Search did not in fact perform better than those who relied on their own knowledge.

Performance: Group A vs. Group B


Self-serving bias is a well-studied cognitive bias in which the perception of reality, events, or an outcome is distorted in a way beneficial to oneself. It is usually linked to the tendency to enhance self-esteem, but that is a very simplistic account. A common scene of self-serving bias is someone certain of winning the lottery with a single ticket who, after losing, immediately claims not to need that much money anyway. The core aspect of this bias is the actual distortion of perception, which can be innocuous, as in the lottery example, or extremely harmful when it becomes massive, as in social media.

In this experiment we can see how the simple use of a web browser is enough to distort the perception of one's performance on a general knowledge test: participants judged their own knowledge sufficient to answer every item correctly, even though their actual performance was no better than that of those who relied solely on their own knowledge. Is the Internet credibility crisis a symptom of massive numbers of users overestimating their own knowledge or understanding of facts? This would explain why, after news is unmasked as fake, those who retweeted or shared it do not really care, and always have a (mostly self-serving) explanation for why it is irrelevant whether it was fake or not.

In addition, there is an increasing tendency toward not-sure-by-default sharing, where users propagate news, pictures, and viral videos with the self-serving warning "might not be true, but it may be". This generates a cognitive protection against misinformation criticism: there is a disclaimer against claiming the shared tale is true, yet it is shared anyway because it "might" be. We face an era where knowledge seems inessential for judging reality, and it is far more important to be the one who "shared it first" or, more critically, the one who "holds the truth". While this might sound catastrophic, social media will soon require controls on the sharing of some content. Google and Facebook have begun by running special algorithms to detect fake news, but the task is not easy: some sites with seemingly legitimate news are in fact propagandistic portals that skew data in favor of their own interests. The study of cognitive biases might be helpful in shaping the immediate future of social media.

About SEELE Neuroscience:

We are the leading lab in Latin America specialized in translational neuroscience for the private sector, with more than 10 years of expertise and six certified labs within the region. We do not have "proprietary methodologies"; instead, we translate the models and principles most accepted by the scientific community to answer everyday questions. We use only replicable and auditable methods; our tools are electroencephalography (EEG), event-related potentials (ERP), and implicit association tests (IAT).

Diploma Course in Consumer Neuroscience and Decision-Based Marketing at the UP

Universidad Panamericana, through its Special Programs department and in cooperation with SEELE Neuroscience, is offering for the first time the diploma course in Consumer Neuroscience and Decision-Based Marketing, six months before the launch of the book of the same name! This preview will undoubtedly give attendees a significant advantage: first-hand access to serious, rigorous knowledge, recognized by the international scientific community, on using the study of the human brain to explain consumer decisions.

At SEELE Neuroscience, we are very proud to cooperate with Universidad Panamericana to offer current, reliable knowledge of maximum value to attendees.

More information here.

Why did surveys fail to reflect people's choice? A dive into brainwaves has some insight: we are asking the wrong questions.

Disclaimer: Despite the results we share in this article, we at SEELE Neuroscience are not claiming we predicted or anticipated Trump’s victory. This is the report of a scientific study on the usefulness of understanding the underlying processes behind self-reporting a decision.

The Georgian 80s rock band R.E.M. had their hit "It's the End of the World as We Know It" in 1987, but contrary to what the lyrics suggest, we are far from feeling fine. Brexit surveys failed last June by margins far beyond all statistical error, and now the 45th president of the United States has won with a margin that absolutely no survey came close to forecasting. It is no news to anyone that we are doing something wrong. It is not a matter of sampling or even of statistical models; it is something simpler and more human: the willingness to express our decisions.

The “hidden vote”

Last year we performed, for our customer Neuropolítika, the now-famous experiment in which the same sample reported extremely different values between their declared vote intent for a candidate and what we call the "neural vote", a fancy term for a brain association index based on EEG interhemispheric coherences. While our neural vote, with a sample of n = 98, predicted the victory of the actual winning candidate with an error of 2.6%, what was more notable was that the same sample expressed a declared intent 14% below the actual voting results.


It was clear that for a certain part of the sample, whatever the brainwaves said, the verbal expression of intent was muted. Survey experts call this part of the population, which does have a vote intent but is not willing to declare it, the hidden or undisclosed voter.

With this background, last year we decided to measure and compare the declared perception of the most relevant candidates for the presidency of the United States, but in a very specific context: the Mexican population in our own country. It is no secret that Trump's campaign was heavily aimed at the south, especially under the script of building a great wall to stop the arrival of more "rapists, drug dealers, and so on". Mexican surveys claimed a generalized hatred of Donald Trump's speech, while a generalized acceptance of Hillary Clinton was on the rise. The perfect scenario to compare what a population says with what it really thinks.

The experiment to dive into undeclared responses

So, when we talk about the "neural vote", what bogus claim can this be? Absolutely none, we promise. We are talking about our version of the very well-known Implicit Association Test, but with an EEG component. The main problem with the IAT is that it relies on reaction time, a variable that raises more questions than answers. What we do instead is use a paired-item model, where we present each subject with an item consisting of a picture and a word or statement. The item is presented for 5 seconds to give the subject time to decide whether the word describes the picture. If so, the subject must press a trigger button as fast as possible when the attention cross appears. The trigger button and the EEG system simultaneously collect the brain activity of the subject. The design looks like this:


Under this setting, subjects are trained with items of explicitly true associations, say, the picture of a cat and the word "cat", or the picture of the sun and the word "sun". Explicitly false associations are alternated in, such as the picture of a Mexican flag and the words "Flag of Australia", or the picture of the moon and the words "chicken soup".

Those familiar with BCI interfaces will have already noticed that what we are doing here is generating enough samples of brain activity as the subject decides whether to press the trigger when faced with an explicitly true or an explicitly false association. The data we use is the EEG power-spectrum hemispheric contralateral coherence for the bandwidth from 4 Hz to 12 Hz, segmented in sliding 4 Hz ranges: 4-8, 5-9, 6-10, and so on.
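The band-segmented coherence described above can be sketched with standard signal-processing tools. This is a minimal illustration, not our production pipeline: the channel pairing, sampling rate, and synthetic signals are all hypothetical, and only the sliding 4 Hz band scheme (4-8, 5-9, ..., 8-12 Hz) comes from the text.

```python
# Sketch: magnitude-squared coherence between two contralateral channels,
# averaged over sliding 4 Hz bands. Channels and data are synthetic.
import numpy as np
from scipy.signal import coherence

fs = 256  # assumed sampling rate in Hz (hypothetical)
rng = np.random.default_rng(0)
left = rng.standard_normal(fs * 10)                       # e.g. a left-hemisphere channel
right = 0.6 * left + 0.4 * rng.standard_normal(fs * 10)   # correlated right-hemisphere channel

# Welch-based coherence with 1 Hz frequency resolution
f, cxy = coherence(left, right, fs=fs, nperseg=fs)

def band_coherence(lo, hi):
    """Mean magnitude-squared coherence in the [lo, hi] Hz band."""
    mask = (f >= lo) & (f <= hi)
    return cxy[mask].mean()

# Sliding 4 Hz bands: 4-8, 5-9, 6-10, 7-11, 8-12
bands = {(lo, lo + 4): band_coherence(lo, lo + 4) for lo in range(4, 9)}
for (lo, hi), c in bands.items():
    print(f"{lo}-{hi} Hz: coherence = {c:.2f}")
```

Each band value lies between 0 (no linear relationship at those frequencies) and 1 (perfect coupling), which is what makes the feature suitable as a similarity input for pattern classification.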


Once we collected enough samples to identify patterns for both kinds of association, we continued the trial with pictures of Donald Trump and Hillary Clinton, each paired with different phrases that political experts suggest are useful statements for measuring vote intent. These were: "I sympathize with", "Represents me", "I would never vote for", "I would bet one month of my wage on his (her) triumph", "I want him (her) to win the presidential election", and each of the candidates' names.

We performed the study in December 2015 with a sample of 98 politically active citizens, such as party militants, journalists, and consultants. The results were analyzed to establish an EEG association index, which is the proportion of similarity, from 0 to 1, for each experimental item (such as the picture of Hillary Clinton with the phrase "Represents me"). One remarkable finding was the predictive value of this EEG index when related to the proportion of the sample that in fact pressed the trigger for each item:


The declared responses and the EEG index fit a linear model with an adjusted R² = 0.74; normality and constant-variance tests passed, with a statistical power of 1 at alpha = 0.05. Each dot represents an experimental item.
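A fit of this kind is a plain simple linear regression of the declared proportion on the EEG index, with the R² adjusted for sample size. A sketch under invented data, since the per-item values were not published with the article:

```python
# Hypothetical per-item data: EEG association index (x) vs. proportion of
# the sample that pressed the trigger (y). Values are illustrative only.
import numpy as np
from scipy.stats import linregress

eeg_index = np.array([0.12, 0.25, 0.33, 0.41, 0.55, 0.62, 0.78, 0.85])
declared = np.array([0.10, 0.22, 0.38, 0.37, 0.52, 0.68, 0.74, 0.88])

fit = linregress(eeg_index, declared)
n, k = len(eeg_index), 1          # n items, one predictor
r2 = fit.rvalue ** 2
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)  # adjusted R^2
print(f"R^2 = {r2:.2f}, adjusted R^2 = {adj_r2:.2f}")
```

With the real item data, the same computation would yield the reported adjusted R² of 0.74; the adjustment matters here because the number of items is small relative to the model.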

What we learned about the Mexican “hate” to Donald Trump

Despite the wave of reactions, Internet memes and surveys reflecting a declared and factual hate to the new president of the United States, the results of our study revealed some disturbing realities. First, here are the results of the study:


The most evident insight is that Mexicans, back in December 2015, were aware of the high odds of Trump's triumph, despite the very low sympathy detected. Clinton's results are less surprising: her highest EEG association index was with the phrase "I sympathize with her", but she was poorly associated with the perception of triumph.

With these results, we see a great opportunity to use neuroscientific tools to revisit the current way of exploring the decision-making process. One way we are using this EEG association index, apart from the "neural vote", is to validate survey items before implementing them in large samples. In this way, we verify key aspects of the items, such as their sensitivity to priming or their likelihood of producing a skewed response.
