A new study on personality types was just published, and I was asked to comment on it for the Washington Post: https://www.washingtonpost.com/science/2018/09/17/scientists-identify-four-personality-types
Pretty wild stuff! It has even been reposted/reblogged by other news orgs, like the Chicago Tribune. I’m pretty excited–it’s the little things…
Have you ever clicked on a link like “What does your favorite animal say about you?” wondering what your love of hedgehogs reveals about your psyche? Or filled out a personality assessment to gain new understanding into whether you’re an introverted or extroverted “type”? People love turning to these kinds of personality quizzes and tests on the hunt for deep insights into themselves. People tend to believe they have a “true” and revealing self hidden somewhere deep within, so it’s natural that assessments claiming to unveil it will be appealing.
As psychologists, we noticed something striking about assessments that claim to uncover people’s “true type.” Many of the questions are poorly constructed – their wording can be ambiguous and they often contain forced choices between options that are not opposites. This can be true of BuzzFeed-type quizzes as well as more seemingly sober assessments.
On the other hand, assessments created by trained personality psychologists use questions that are more straightforward to interpret. The most notable example is probably the well-respected Big Five Inventory. Rather than sorting people into “types,” it scores people on the established psychological dimensions of openness to new experience, conscientiousness, extroversion, agreeableness and neuroticism. This simplicity is by design; psychology researchers know that the more respondents struggle to understand the question, the worse the question is.
But the lack of rigor in “type” assessments turns out to be a feature, not a bug, for the general public. What makes tests less valid can ironically make them more interesting. Since most people aren’t trained to think about psychology in a scientifically rigorous way, it stands to reason they also won’t be great at evaluating those assessments. We recently conducted a series of studies to investigate how consumers view these tests. When people try to answer these harder questions, do they think to themselves, “This question is poorly written”? Or do they instead focus on its difficulty and think, “This question’s deep”? Our results suggest that a desire for deep insight can lead to deep confusion.
Confusing difficult for deep
In our first study, we showed people items from both the Big Five and from the Keirsey Temperament Sorter (KTS), a popular “type” assessment that contains many questions we suspected people find comparatively difficult. Our participants rated each item in two ways. First, they rated difficulty. That is, how confusing and ambiguous did they find it? Second, what was its perceived “depth”? In other words, to what extent did they feel the item seemed to be getting at something hidden deep in the unconscious?
Sure enough, not only were these perceptions correlated, but the KTS was seen as both more difficult and deeper. In follow-up studies, we experimentally manipulated difficulty. In one study, we modified Big Five items to make them harder to answer, like the KTS items, and again we found that participants rated the more difficult versions as “deeper.”
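To make the first finding concrete, here is a minimal sketch of the kind of analysis it implies: correlating per-item difficulty ratings with per-item depth ratings. The numbers below are made up for illustration and are not the actual study data.

```python
# Hypothetical mean ratings for 8 assessment items (1-7 scales).
# Illustrative values only, not data from the actual studies.
difficulty = [2.1, 3.4, 5.2, 4.8, 1.9, 6.0, 3.1, 5.5]
depth      = [2.5, 3.0, 5.8, 4.4, 2.2, 6.3, 3.6, 5.1]

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of ratings."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson_r(difficulty, depth)
print(round(r, 2))  # items rated harder also tend to be rated deeper
```

With these toy numbers the correlation comes out strongly positive, which is the shape of the “difficult reads as deep” pattern described above.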
We also noticed that some personality assessments seem to derive their intrigue from having seemingly nothing to do with personality at all. Take one BuzzFeed quiz, for example, that asks about which colors people associate with abstract concepts like letters and days of the week and then outputs “the true age of your soul.” Even if people trust BuzzFeed more for entertainment than psychological truths, perhaps they are actually on board with the idea that these difficult, abstract decisions do reveal some deep insights. In fact, that is the entire idea behind classically problematic measures such as the Rorschach, or “ink blot,” test.
In two studies inspired by that BuzzFeed quiz, we found exactly that. We gave people items from purported “personality assessment” checklists. In one study, we assigned half the participants to the “difficult” condition, wherein the assessment items required them to choose which of two colors they associated with abstract concepts, like the letter “M.” In the “easier” condition, respondents were still required to rate colors on how much they associated them with those abstract concepts, but they more simply rated one color at a time instead of choosing between two.
Again, participants rated the difficult version as deeper. Seemingly, the sillier the assessment, the better people think it can read the hidden self.
Intuition may steer you wrong
One of the implications of this research is that people are going to have a hard time leaving behind the bad ideas baked into popular yet unscientific personality assessments. The most notable example is the Myers-Briggs Type Indicator, which infamously remains quite popular while doing a fairly poor job of assessing personality, due to longstanding issues with the assessment itself and the long-discredited Jungian theory behind it. Our findings suggest that Myers-Briggs-like assessments that have largely been debunked by experts might persist in part because their formats overlap quite well with people’s intuitions about what will best access the “true self.”
People’s intuitions do them no favors here. Intuitions often undermine scientific thinking on topics like physics and biology. Psychology is no different. People arbitrarily divide parts of themselves into “true” and superficial components and seem all too willing to believe in tests that claim to definitively make those distinctions. But the idea of a “true self” doesn’t really work as a scientific concept.
Some people might be stuck in a self-reinforcing yet unproductive line of thought: personality assessments cause confusion, that confusion matches their intuitions about how their deep psychology works, and so they tell themselves the confusion is profound. Intuitions about psychology might therefore be especially pernicious. Following them too closely could lead you to know less about yourself, not more.
A short update: After accidentally, and boneheadedly, deleting my website’s database (right before I was going to make a backup), I can safely say that most of the former website, with some updated edits, has been restored.
By far the worst part was restoring blog posts. Unfortunately, I waited a little too long to use Google’s cached version, which had all my posts archived, so I had to use the Wayback Machine (side note: that site is AWESOME). It only had one page archived, but I’m pretty sure that covered my 10 most recent posts (so a couple of really early blog posts are now lost in the ether). Now, you might be saying: Alex, why did you restore old blog posts? Well, I dunno, posterity’s sake? They were a part of my academic life, so why not? Obviously I wouldn’t have bothered if web caches didn’t exist, but since they do, it was easy to do.
Anyway, I’m going to try to keep this blog updated with the comings and goings of my academic life, so stay tuned!
It’s been a while since I actually made a post on my website updating my academic life. Well, since the education and technology class has finished, that’s pretty much what I am back to. We’ll see if I can keep up.
Anyway, this summer it looks like I’ll be teaching two classes in the Psychology Department: Health Psychology and, again, Intro to Research Methods. I’m excited to develop a student-focused Health Psych class from the ground up. I am also excited to re-tool and revamp my research methods class from last year. I’m probably going to use a new book that I believe is better and more approachable.
However, the best part of teaching research methods again is the ability to implement part of the education and technology final project. Take a look at the video my colleague, Molly Metz, and I made below (it’s intentionally silly):
So we can’t really implement the personalized adaptive learning platform in 6 weeks, but I can implement and integrate the ZAPS portion (or something like it) into the class in that time, just to make sure the students know what the class is about, as well as understand the need for psychological science so early in their college career. We will pretest attitudes and interests, move through the ZAPS process, finish up with a small paper and a posttest of attitudes and the like, then compare the pretest and posttest for any changes. Hopefully there’s a publication in there somewhere (Teaching of Psychology seems like the appropriate venue, no?). More importantly, hopefully the new perspective in this type of course will lead to better-prepared students in the upper-division and lab classes at UCSB. One can only hope.
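The pretest/posttest comparison described above boils down to a paired analysis of change scores. Here is a minimal sketch of that calculation with made-up attitude scores (1–7 scale); the data and scale are placeholders, not anything collected from the class.

```python
# Sketch of a paired pretest/posttest comparison of attitude scores.
# All numbers are hypothetical, for illustration only.
import statistics

pretest  = [3.0, 4.0, 2.5, 3.5, 4.5, 3.0, 2.0, 4.0]
posttest = [4.0, 4.5, 3.5, 4.0, 5.0, 3.5, 3.0, 4.5]

# Per-student change scores, since each student is their own control.
changes = [post - pre for pre, post in zip(pretest, posttest)]
mean_change = statistics.mean(changes)
sd_change = statistics.stdev(changes)

# Paired t statistic: mean change relative to its standard error.
n = len(changes)
t_stat = mean_change / (sd_change / n ** 0.5)
print(mean_change, round(t_stat, 2))
```

In practice a dedicated stats package would handle the test and its p-value, but the logic is just this: difference scores, their mean, and their standard error.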
My health psych class won’t be as technologically advanced, but I do hope to get the students interested in health psych by having them participate in a health behavior change assignment over the 6-week session. College students are full of bad habits, so maybe a few of them will continue to change their behavior after the course is completed. Showing them real studies with important health implications is also important – my goal is to use the book only as a support, not the complete resource for the course. Relying solely on a textbook is boring and predictable.
There is apparently lots of work to be done in the next couple of months, since both classes are in the first session of summer school! And then a trip to Berlin for a conference! 2013 is one heckuva year!
For this week’s readings, I saw a common theme explored: Is educational technology worth implementing in a new educational setting, and if it is worth it, what are the expected and tangible benefits?
To begin, a chapter by Hooper and Rieber (1995) explored the adoption of technology in a classroom setting, discussing 5 steps for technology to be used effectively. Granted, this analysis was done in the mid-90s, so there wasn’t really an Internet to speak of – at least not the way we see it now. However, they break down the 5 steps rather well; it was especially helpful to compare the 5 steps under full actualization of new technology versus traditional implementation. The 5 steps are: Familiarization, Utilization, Integration (which caps the traditional view), Reorientation, and Evolution. The first two steps are critical, of course, and they speak to the discussion we had in class a couple of weeks ago about professional development courses for teachers and continuing education workshops for higher education instructors. Those first two steps, in my mind, are the roadblocks in a large education system. Sure, if the technology is small and consumer-ready, the teacher might have the means to begin the process; however, if it is large and cumbersome, then familiarization and utilization will be quite low. The last two phases (Reorientation and Evolution) require more than just implementation in the course (the 3rd step) – they require a change in thinking within the teacher. A very tall order. In addition, the implementation needs to lead to structure and process change (Evolution) in order to remain relevant.
In a similar vein, Breslow (2007) discusses several studies, specifically at MIT, and organizes them into 3 conclusions: (1) successful educational technologies met a specific educational need previously unmet by traditional methods, (2) too much technology, or ineffective technology, can be detrimental to learning, and (3) there are important relationships between technologies and the learning environments in which they exist. I’d like to focus on the first 2 conclusions. First, I see parallels with our term project and the idea that specific challenges should be met with as specific a technology as possible. There really is no benefit to broad-strokes solutions, since the introduced technology might not specifically address an issue. That is not to say there aren’t myriad technological solutions for a given educational challenge, but it does help to use a scalpel instead of a cleaver in as many situations as possible. If the cleaver is used, this could lead to Breslow’s second conclusion: ineffective or detrimental technology in the classroom. If the tech isn’t helping, it is obviously misused, misunderstood, or misplaced. In that case, a more tech-free solution should take its place.
The message I wish to convey is that technology can be good, but it should be thoughtfully used to solve an educational challenge, whether in K-12 or higher education. We all have our qualms about traditional education and the failure of lecturing. So there need to be tangible benefits from using the technology (such as in Mabry & Snow, 2006) for the implementation to be worth it.