Image source: here A recent blog post at New York Magazine's Science of Us discusses research on how self-location (whether you consider yourself more located in your heart or your brain) can reveal important personality characteristics. Importantly, a recent study published by Adam, Obodaru, and Galinsky (2015) found that whether you consider yourself a heart-locator or a brain-locator has consequential downstream effects as well. Specifically, the researchers found that the location of your self-essence affects how you define the start and end of life and can determine which charities you are most willing to donate time and money to (e.g., if you are a brain-locator, you are more likely to support Alzheimer's research than the American Heart Association).
These findings have an interesting implication for judgment and decision-making. Specifically, they suggest that reliance on System 1 (our automatic, emotion-based responses) versus System 2 (our more deliberate, rule-based responses) differs at the individual level. In other words, how much weight a person puts on their System 1 response (emotions) can be predicted by whether they are a heart-locator or a brain-locator. Unsurprisingly, heart-locators are more likely to rely on their emotions when making decisions, while brain-locators are more likely to rely on reason (see Fetterman & Robinson, 2013). While we all have both System 1 and System 2 responses, and the balance between them can be affected by external factors, it is interesting that a simple dichotomy of self-location identifies an internal factor that makes an individual more or less likely to rely on System 1. As Christian Jarrett points out in his blog post, you can learn a lot about a person and how they will respond to things just by asking whether they feel more located in their heart or their mind.
Image source: http://www.deviantart.com A couple of weeks ago, San Francisco 49ers linebacker Chris Borland announced his retirement from the NFL after one season, at the age of 24. Borland was ranked as one of the top rookies of the year, and his retirement represents a huge blow to the 49ers. But the reason Borland decided to quit is the most interesting part of the story. In an interview with ESPN, Borland said he was quitting because of concerns over the long-term effects of head injuries. Borland continued, "[f]rom what I've researched and what I've experienced, I don't think it's worth the risk."

Why is this so interesting? We know from research on hyperbolic discounting and intertemporal choice that most people are incredibly present-biased: they prefer the present to the future and focus on short-term outcomes or gains at the cost of long-term ones. Also, with regard to head injuries, the worst health outcomes are often caused by the many small hits that linebackers take over their careers. This implicates another cognitive bias, driven by mental accounting, known as the adding-up effect: we have a hard time seeing how many small things add up to larger problems over time. Finally, research on risk and temporal construal has demonstrated that moving a risk further away in time makes it less aversive - in other words, people are usually willing to take on more risk if it will occur in the distant future rather than the near future.

This means Borland made quite an extraordinary decision: he chose the future over the present, he saw how many small outcomes added up to one large negative outcome, and he weighted future risk the way he would weight current risk (a stylized sketch of that first tradeoff appears at the end of this post). Those are three very large biases to overcome. It's also a decision that many players in his position would never make (or potentially even consider). As one player said in response to Borland's decision: "No offense to anyone but I'm playing until I can't anymore. I love this game to [sic] much." I would guess that many players feel the same way (not to mention the incentives that come along with continuing to play).

What I want to know is how Borland was convinced to give up millions of dollars and fame (now, and potentially later) for his long-term health and increased longevity (in the future). In describing the decision, the article says, "[a]fter the season, Borland said, he consulted with prominent concussion researchers and former players to affirm his decision. He also scheduled baseline tests to monitor his neurological well-being going forward 'and contribute to the greater research.' After thinking through the potential repercussions, Borland said the decision was ultimately 'simple.'" Whatever techniques his family, friends, and researchers used to convince Borland to make the right choice for the future, they could ultimately be valuable to anyone facing intertemporal tradeoffs between the want and the should (things that give us short-run pleasure at the cost of long-term gains, or that come with high long-term costs).

Just to get an understanding of the hits players take and how they affect visual memory and cognitive impairment, I found the infographic below. It's shocking how many hits NFL players experience in a season (over 1,000 for linemen and linebackers).
Especially since the most impairment was seen in players taking a lot of middling hits. Because of the representativeness heuristic, players may expect only big hits to have big consequences, but in reality it's the many smaller hits - the ones they probably pay far less attention to - that can lead to the deterioration of brain tissue and cognitive impairment. Image source: The Globe and Mail (accessed via visual.ly here).
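As a footnote to the present-bias point above, here is a minimal sketch of hyperbolic versus exponential discounting. The formulas are the standard textbook ones; the "salary versus health" framing, the payoff values, the delay, and the discount parameters are entirely hypothetical choices of mine for illustration - none of them come from the research cited above.

```python
# A minimal sketch of present bias via hyperbolic discounting.
# All values below are hypothetical and chosen only for illustration.

def exponential_value(amount, delay_years, annual_rate=0.05):
    """Standard exponential discounting: V = A / (1 + r)^D."""
    return amount / (1.0 + annual_rate) ** delay_years

def hyperbolic_value(amount, delay_years, k=0.5):
    """Hyperbolic discounting: V = A / (1 + k * D).
    Near-term delays are discounted steeply; the curve flattens later."""
    return amount / (1.0 + k * delay_years)

# A stylized version of Borland's tradeoff: salary now vs. health later.
salary_now = 1.0     # normalized value of another season's pay and fame
health_later = 3.0   # normalized value of long-term health
delay = 30           # years until the worst health consequences are felt

print(f"exponential value of future health: {exponential_value(health_later, delay):.2f}")
print(f"hyperbolic value of future health:  {hyperbolic_value(health_later, delay):.2f}")
# Output: 0.69 vs. 0.19. A hyperbolic discounter values the (three times
# larger) future health payoff at less than a fifth of today's salary,
# so "keep playing" wins -- which is exactly the bias Borland overcame.
```

On this stylized reading, choosing retirement means overriding a discount curve that makes future health look tiny next to a present paycheck.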
Image source: Rewire Me In his book, Thinking, Fast and Slow, Daniel Kahneman describes some of the many consequences of relying on System 1 thinking (our intuitive, associative, and emotional thinking). He cites many instances where relying on our more deliberative, effortful, and rational System 2 would attenuate cognitive biases and result in more optimal decision-making. In covering some of the topics related to mental shortcuts and the biases caused by relying on System 1, a student in my BEDM course asked whether mindfulness could be a way to actively engage System 2. In the course of our conversation, this student also sent me two HBR articles discussing mindfulness.
One article, titled "Mindfulness Mitigates Biases You May Not Know You Have," discusses research showing that mindfulness can reduce implicit and associative biases. Implicit biases and attitudes are measured using the IAT (Implicit Association Test - you can try taking one yourself here) and are the automatic or associative thoughts and feelings we have about a target. Usually these attitudes have to do with prejudice or stereotyping, such as our feelings about certain races or sub-populations. The research discussed in the article looked specifically at implicit attitudes toward race and age, and found that participating in a 10-minute mindfulness exercise before taking the IAT reduced implicit bias toward these groups. The differences were significant, though the effect sizes were not exceptionally large. This study provides further evidence that engaging in mindfulness can reduce reliance on associative processing, and thereby diminish harmful implicit biases - something that could be especially valuable in the workplace, where hiring decisions and employee interactions may be harmfully affected by automatically induced prejudices and biases.

The second article, titled "There are Risks to Mindfulness at Work," takes a completely different viewpoint. In it, the author, David Brendel, argues that mindfulness can be misused as a strategy to avoid critical thinking: rather than confronting a tough decision, individuals may use meditative strategies to disengage from the task and avoid the decision altogether. The problem here seems to be using mindfulness as an approach to dealing with all types of stress. When it comes to feelings of anxiety or burnout, I agree that mindfulness can be a helpful and simple approach for some individuals. But engaging System 2 (our more effortful thought processes) can also increase feelings of stress and cognitive strain. These are not bad feelings - rather, they are the normal feelings that accompany more critical and rational thinking - and they shouldn't be avoided (that is what relying on System 1 already does for us). If mindfulness is being used as a way to get more in touch with System 2, then it is a sound approach to decision-making; but if it is yet another strategy for being "cognitively lazy" and avoiding the strain and discomfort of System 2 thinking, then it should be avoided (and definitely not forced upon people, as has become the practice in some workplaces).

Ultimately, I think both articles highlight a tactic for reducing associative thinking that can be positive if applied correctly and negative if used incorrectly (as an avoidance tactic rather than a way to truly engage with difficult mental tasks). As mindfulness becomes more prevalent in society and the workplace, it is important to remember that cognitive strain isn't always bad, and that any tactic that moves you away from deliberative thinking can be harmful in arenas where such thinking is necessary or required. Thank you to Karen Hübert for sharing!

Image source: www.buzzfeed.com On Monday, Tinder announced Tinder Plus, which charges for premium features. Usually the introduction of a premium version of an app isn't exactly big news, but the company made a lot of interesting decisions regarding this shift.
Raising prices is in general a difficult task -- people find it unsavory to pay for something that was once free (see: loss aversion). But with the right behavioral economic principles, a company can raise prices without concurrently causing outrage. Tinder is not such a company. Tinder has not only started charging for features that used to be free, but it has also decided to employ a tiered pricing program based on age (and it is using a rather arbitrary age cutoff at that). Tinder will charge different prices to users aged 30 and older ($19.99) and to those under 30 ($9.99). Tinder justified this pricing decision with the following statement to Bloomberg: "'Lots of products offer differentiated price tiers by age, like Spotify does for students, for example,' Rosette Pambakian, a spokeswoman for Tinder, wrote in an e-mail. 'Tinder is no different; during our testing we’ve learned, not surprisingly, that younger users are just as excited about Tinder Plus, but are more budget constrained, and need a lower price to pull the trigger.'"
The first mistake Tinder made was initially charging nothing, thus anchoring users on a free price point ("free" holds special meaning for consumers). The second mistake was raising prices without tying the increase exclusively to new or special features. The company should have kept everything that was currently free in the free version and started charging only for new or additional features (thereby justifying the price increase). The third mistake, and perhaps the company's worst, was charging older users more for no good reason. It cites student discounts as precedent, but then it should simply have offered a discounted price relative to a standard base price - namely, the higher (30-and-above) price. People think student discounts are fair - we all understand that students are an income-constrained population, and we generally see discounts for that group as acceptable. Plus, if this were really what Tinder was doing, the age cutoff would be lower than 30, and the company could arguably make even more profit this way. Since the pricing isn't based on student status, it seems that Tinder is simply "punishing" older users unfairly. Perhaps the company is trying to discourage people over 30 from using its app (maybe trying to push older users to its sibling platforms, OkCupid and Match). So either Tinder is employing a strategy to shed older users, or it is completely out of touch with principles of fairness as they relate to consumer pricing. Either way, the company could have handled this announcement a lot better, and it would probably make a greater profit if it embraced social norms related to fairness.

In general, people are very averse to the idea of price discrimination, especially when it is based on a demographic variable. This isn't to say that companies don't get away with it -- airlines, car services, and booksellers all engage in price discrimination every day, and we consider volume-based pricing a standard tactic. The difference is that those are policies consumers can "select into," and thus they feel fairer than policies based on something consumers have no "control" over. If Tinder wanted to price discriminate, it should have found another way to do so, or, again, it should have reframed the issue (the standard price is $19.99, but students can receive a discounted price of $9.99). A quick review of headlines related to this announcement suggests that the public and Tinder users find the new pricing policy outrageous and unfair (which I think any of the students in my Behavioral Economics and Decision-Making class could have predicted).
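To make the reframing argument concrete, here is a minimal sketch contrasting the two framings. Only the two price points come from the announcement; the function names and the discount structure are my own hypothetical illustration, not Tinder's actual pricing logic.

```python
# Two framings of the same pair of price points ($19.99 and $9.99).
# The prices come from Tinder's announcement; the code itself is a
# hypothetical illustration, not Tinder's actual pricing system.

BASE_PRICE = 19.99
DISCOUNTED_PRICE = 9.99

def price_as_age_tier(age: int) -> float:
    """Tinder's framing: price keyed to age, a variable users cannot
    control -- perceived as a surcharge imposed on the 30+ group."""
    return DISCOUNTED_PRICE if age < 30 else BASE_PRICE

def price_as_student_discount(is_student: bool) -> float:
    """The reframed version: one standard price for everyone, with an
    opt-in discount for students, a group widely seen as income-constrained."""
    return DISCOUNTED_PRICE if is_student else BASE_PRICE

# The amounts charged are identical under both schemes...
assert price_as_age_tier(35) == price_as_student_discount(False) == 19.99
assert price_as_age_tier(25) == price_as_student_discount(True) == 9.99
# ...but the second framing anchors everyone on the same base price and
# presents the lower price as a gain users select into (a discount),
# rather than a loss imposed on older users (an age-based markup).
```

The revenue math can be the same under both schemes; only the reference point changes, and that reference point is what drives the fairness judgment.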
Special thanks to Eliza Coleman for sharing!

Image source: pusheen.com Yesterday, I came across this article in New York Magazine. The article summarizes a study, published in Personality and Individual Differences, which found that posting more selfies online is significantly correlated with psychopathy and narcissism, and that time spent online is correlated with measures of self-objectification and narcissism in men. I'm not sure why the study focused specifically on men rather than both genders, but I suspect the findings would be similar across genders (at least in my lay experience). What interested me more, however, was the concept of self-objectification, a concept I had not come across before. Self-objectification, in my review of the literature since, well, yesterday, is the internalization of an external observer's perspective as the primary view of one's self. In other words, physical appearance is prioritized as a signal of self-worth, since outward appearances are what outsiders can most easily observe. Past research has found that self-objectification is linked to depression, sexual dysfunction, and eating disorders in women (unsurprisingly). Little research has looked at whether self-objectification is present in men and what effects it may have on them. It seems likely that self-objectification could affect men as well as women, though perhaps to a lesser extent (or to an equal or greater extent in certain sub-populations). Thus, this new study contributes to objectification theory by identifying a real-life behavior that is correlated with self-objectification (time spent on social-networking sites). Still, it would be just as interesting to study the online behavior of women and see whether there are gender differences in the personality traits correlated with the number of selfies and time spent online (rather than focusing on one gender and leaving us to guess whether these findings would hold for women).
It seems to me that self-objectification would be a problem in any society that values external appearance. But why, then, are some people more susceptible to it than others? And how do people respond to self-objectification? If selfies are a signal of self-objectification, how do others treat people who post more (versus fewer) selfies (a sort of meta-self-objectification, if you will)? I'm clearly not a big fan of the heavily curated nature of social-networking sites, but at the same time, I can't help but feel sadness for those who have started self-objectifying. Ultimately, there will come a time when no number of filters or amount of photo-manipulation will make those selfies acceptable, and what happens then? What happens after years of cultivating an external appearance at the cost of other pursuits whose benefits are not displayed outwardly?

Image source: Good Day Goldfish Recently, Facebook had to apologize to its users for its "Year in Review" product, which reminded many users of tragic, sad, or otherwise upsetting events from the past year that would have been better left forgotten (or at least not blasted into a newsfeed couched in exclamation points). One journalist commented on the fact that Facebook's default is positivity: Facebook cultivates and encourages positive events only. For example, comments with the word "congratulations" in them get pushed to the top of newsfeeds by Facebook's algorithm, and you can only "Like" posts - you can't "Dislike" them or easily offer condolences or commiseration. This led me to think about the potential harm Facebook causes by forcing a default of positivity on its users. We all know that our lives are not filled with purely joyful events, and these are not the only types of milestones we want to share either. Often, people seek out support and empathy for negative events and need this support more than they do for positive events (of course, this can be taken too far as well). And because of the ecosystem Facebook has created, people often feel that they have to cover their posts in a thick lacquer of happiness and success, which is not only partially or completely false, but also makes people feel ashamed of anything less than perfectly positive (emphasis on the perfect part).
Years of research have shown that negative emotions are productive and make us better people. Denying the existence of negative emotions can also hinder our ability to feel and appreciate positive emotions (for an incredibly interesting and engaging discussion of this, see here). If we think about Facebook in this larger context, it would suggest that the social media site is doing more harm than good by trying to make us focus only on the positive aspects of our lives and deny or hide the less-than-perfect parts. In reality, this probably drives a wedge between people rather than bringing them together, which is, ostensibly, the goal of the website. Evidence for this comes from a recent study on the positive emotional benefits of listening to sad music, which found that sad music can provide consolation and help people regulate their negative emotions. It's possible that more realistic Facebook posts could do the same, offering benefits both to the people reading the post and to the poster him- or herself. Facebook users have been asking for a "Dislike" button for years, but the site has never complied. So why, if negative emotions are good for connection, emotion regulation, and overall well-being, would Facebook distance itself from any form of negativity or negative emotional sharing? The company obviously believes that negativity would do more harm than good -- whether in a strictly monetary sense (ad revenue) or in terms of an overall distrust of its users' ability to "dislike" in a sensible and respectful way. I do think a lot of good could come from Facebook embracing a more complete emotional profile of its users, or at least not forcing them to be positive even if it won't provide an explicit outlet for them to be negative.

Image source: Shel Silverstein via Asher Days blog When I first started my dissertation, I came across an old Association for Consumer Research (ACR) address by Ivan Ross about the different types of risks that consumers face. One risk in particular -- opportunity lost risk -- caught my attention. This risk has received little attention in the literature, but I consistently find myself thinking about it and its prevalence in a society overwhelmed by new product opportunities and constant technological upgrades. Social media has even led to the coinage of the term "FOMO" (Fear of Missing Out): anxiety caused by the fear of missing out on social interactions, potential experiences, or other such events. Opportunity lost risk is related but different - FOMO is socially defined, while opportunity lost risk is the fear of missing out on a better opportunity by taking (or not taking) a certain action (that, at least, is how I think about it - it has not been well defined in my review of the existing research).
I started thinking about opportunity lost risk again after reading a recent article in Entertainment Weekly about the making of Gone with the Wind. The article discusses the producer David Selznick's perseverance in the face of many obstacles and repeated failures while making the film. The writer, Chris Nashawaty, says of Selznick, "But like all gamblers, he lived in a constant state of fear that someone else might rake in a pot that he felt rightfully belonged to him." This is an interesting perspective on gambling, as most research on financial risk-taking has found that people are extremely risk averse because they fear losing, or fear the regret associated with taking a gamble and subsequently losing. But opportunity lost risk suggests that some people may instead focus on failing to win a prize they could have won by taking the gamble - a sort of inverse loss aversion (the loss of a gain looms larger than potential losses). So when do people focus on the gains they might miss out on versus the losses they may incur by not taking a risk? My suspicion is that it is a combination of individual factors and contextual cues. Whatever the cause, the question is becoming more and more important in our increasingly entrepreneurial economy and highly turbulent product marketplace.

Image source: Lovelyish I'm a big fan of "things you learn in your [insert age-decade here]" lists. There's something very comforting about someone -- even someone you don't know, or who has little expertise on the subject other than having lived through it -- telling you that all the things you worry about and all the things you dislike about yourself will become meaningless as you get older. This particular article in the NYTimes was enjoyable because it's prospective (for me, at least) rather than retrospective. In other words, perhaps I can take these lessons to heart now, rather than reading the list and checking off all the items I did indeed learn when I was 20. But then again, maybe you can't learn certain lessons until you reach the appropriate age -- there are surely things I wish I had known in my 20s, but I don't think my 20-year-old self would have listened to a single thing future-Liz had to tell her.
A sub-list of my favorite lessons from the article:
Image source: paper boats in puddles Yesterday, I was lucky enough to sit in on Leif Nelson's journal club at UC Berkeley's Haas School of Business. The article for the week was a forthcoming piece in the Journal of Experimental Psychology: General by Andrew Meyer and Shane Frederick (with a bunch of coauthors). In the paper, the authors thoroughly and concisely demonstrate that the effect of disfluency (as manipulated through font) on analytical reasoning is a false positive. After pooling the results of sixteen studies, the authors "find no evidence of a disfluent font benefit under any conditions." To review, the original finding suggested that reducing the clarity of the font used for a question or task leads to more analytical or deliberate thought, which in turn improves performance on counter-intuitive math problems (from the Cognitive Reflection Test, or CRT). What I find especially interesting about the determination that this finding is nothing more than a false positive is that the original finding had so much intuitive appeal and was so easy to manipulate (and potentially apply). Anyone, psychologist or not, would probably agree with the postulation that making something more difficult to read should make people think more carefully about the problem at hand. This poses an interesting problem, as Leif noted in his discussion: will the original paper and findings still act as a placeholder for an idea that makes intuitive sense and that people think should work? More generally, how do people respond to a false-positive finding when the original finding is intuitively appealing versus counter-intuitive?
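For readers curious what "pooling the results of sixteen studies" amounts to, here is a minimal sketch of inverse-variance (fixed-effect) pooling, one standard way to combine effect sizes across studies. The effect sizes and standard errors below are invented for illustration; they are not the values reported by Meyer, Frederick, and colleagues.

```python
import math

# A minimal fixed-effect meta-analysis: pool per-study effect sizes,
# weighting each by the inverse of its variance (1 / SE^2). The numbers
# below are invented for illustration -- NOT from the actual paper.

effects = [0.40, 0.05, -0.02, 0.10, 0.00, -0.08]   # hypothetical d's
std_errors = [0.20, 0.12, 0.10, 0.15, 0.09, 0.11]  # hypothetical SEs

weights = [1.0 / se ** 2 for se in std_errors]
pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

print(f"pooled effect = {pooled:.3f} (SE = {pooled_se:.3f})")
print(f"95% CI = [{pooled - 1.96 * pooled_se:.3f}, {pooled + 1.96 * pooled_se:.3f}]")
# A single early study can show a large effect (0.40 here) while the
# pooled estimate sits near zero with a CI straddling zero -- the
# typical signature of a false positive washed out by replication.
```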
The Meyer and Frederick paper also offers an important lesson. Ultimately, the authors found that the original disfluency result was driven by just one of the three CRT questions (the widget problem, for those who are interested), and that it reflected movement in the control group rather than the treatment group. This highlights the need to confirm that an effect is happening where it should be happening - in response to the treatment, rather than as movement in the control (a point made nicely by Uri Simonsohn in the journal club meeting). Hopefully, researchers will move away from using font as a fluency manipulation, and this idea, no matter how intuitively appealing, will be appropriately discounted.

Image source: The DNA Life (artwork by Fong Qi Wei) In an opinion piece in the NYTimes, Jay Belsky, a professor of Human Development at UC Davis, argues that funding and developmental programs for children could be more effectively allocated based on genetic information. Belsky cites research from a forthcoming issue of the journal Development and Psychopathology providing evidence that children show varying levels of susceptibility to their environments and to stress, and that this variability can be traced to specific genes. Children with one type of gene are orchids--highly sensitive to their environments and more responsive to interventions--while children with another type are dandelions--less susceptible to environmental factors and less responsive to interventions. For this reason, Belsky calls resilience a double-edged sword: children who are resilient are not as adversely affected by negative or unsupportive environments, but they also don't see the same boost from developmental programs aimed at improving life outcomes. While this is, in and of itself, an interesting and thought-provoking finding, Belsky takes it one step further by suggesting that we should use DNA sequencing to identify and target the children who will be most responsive to interventions - arguing in terms of both efficiency and ethics. He qualifies this recommendation by noting the need for additional research and technology, but he envisions the following:
"One might even imagine a day when we could genotype all the children in an elementary school to ensure that those who could most benefit from help got the best teachers. Not only because they would improve the most, but also because they would suffer the most from lower quality instruction. The less susceptible — and more resilient — children are more likely to do O.K. no matter what. After six or seven years, this approach could substantially enhance student achievement and well-being." Of course, this is a bit of a slippery slope. While Belsky is not a proponent of abandoning the resilient children altogether, it is hard not to see this as the most likely outcome. There is a sense of "well, the dandelion children will do fine no matter what, so what difference does it make?" Belsky suggests that other forms of intervention can be found for these types of children, but in a society that seemingly values cost-effectiveness and efficiency above all else, the quickest, cheapest fix is to genotype all children, put the orchids with the best teachers and the most funding/programs, and put the dandelions with the sub-par teachers and less funding/programs. Then, ultimately, everyone will come out in the middle. I don't want to harp on Belsky too much because I do think he makes some good points, and he tries to argue that it shouldn't be just about saving money, but there is still something unsavory to the idea of using a person's resilience against them. Especially, when we have no idea what the limits to that resilience may be. |