Did you know that the authors of Make Your Job a Calling also have a great guest blog gig over at Psychology Today? It’s called Vocation Vocation Vocation and it offers a great mix of wisdom regarding calling and fun pop culture references. We occasionally cross-post some of the content over here, but you should really head over to the Psychology Today site and bookmark it to make sure you don’t miss anything.
Web-Slinging and World-Changing: Career Guidance from Spider-Man, Martin Luther, and a Hospital Janitor
Check out Bryan J. Dik’s wonderful address at the University of Dubuque. It was part of a recent program called the Wendt Character Initiative. With a title like “Web-Slinging and World-Changing: Career Guidance from Spider-Man, Martin Luther, and a Hospital Janitor,” how can you resist?
Why Does Leslie Knope Love Her Job?
This post first appeared on Acculturated.com
by Ryan Duffy
Imagine you work at a job where your boss only cares about himself and openly dumps all of his work onto you. Your primary job tasks involve responding to citizen complaints and organizing community projects that more often than not go unnoticed. Oh yeah, and your coworkers consist of a well-meaning idiot, a selfish brat, a status-driven wannabe entrepreneur, a clumsy kiss-up, and an exercise-obsessed perfectionist, among others. When you finally get a promotion to your dream job, one coworker bribes you to get your new office and another makes you eat a bite of their Caesar salad and then tries to kiss you. Yet at the end of each day you (almost) always feel thrilled about your job.
For the approximately 3.5 million of us who have hung in there with Parks and Recreation over the last five seasons, the person described above is immediately recognizable: the positive, lovable, hardworking, selfless Leslie Knope, played expertly by Saturday Night Live alum Amy Poehler. For the first four seasons of the series, Leslie was the deputy director of the parks department in Pawnee, a small, fictitious town in Indiana; at the end of the fourth season she was elected to the city council. In the face of bad bosses, weird coworkers, and demanding, unappreciative citizens, Leslie somehow manages to truly love what she does. This is because, in “psychological speak,” Leslie approaches her work as a calling.
Psychologists have recently taken a keen interest in studying what it means to have and live out a calling. A calling is different from a job or career–it is something that a person feels pulled to do, is a central part of that person’s life meaning or purpose, and is specifically used to help others. As deputy director of the parks department or as a city councilwoman, Leslie is defined by her job. As someone who constantly works to make Pawnee healthier and throws birthday celebrations even for her crotchety boss, Leslie is always helping others through her work or in her workplace. Research has shown that people like Leslie who live out their calling are more committed to, and happier with, their jobs and are also happier in life. In jobs that are particularly challenging like Leslie’s, having a calling may function as a protective mechanism–helping people stay motivated and engaged because they know this is the job they are supposed to be doing.
People who work in a mostly thankless job and/or in a workplace that is a few steps away from healthy might be able to learn a thing or two from Leslie. If we too want to reap those benefits of living out a calling, it may be less about choosing a perfect job and more about making the most of the job we do have. Maybe we can follow Leslie’s lead and ponder from time to time the ways we can make our work more meaningful and impactful–doing so may ultimately help us feel some of that Knope gusto.
Ryan Duffy is assistant professor of psychology at the University of Florida and coauthor of the book Make Your Job a Calling (Templeton Press, October 2012). He and his coauthor Bryan Dik have just started a blog at Psychology Today titled Vocation, Vocation, Vocation.
New Podcast
Check out the new podcast that Ryan and Bryan did for their special issue of the Journal of Career Assessment!
“Is work good or bad?”
That is the lead-off question in a recent op-ed that appeared in the New York Times. With “jobs” being one of the more salient buzzwords of the 2012 election season, it’s a question worth considering. If you have the time, head on over and see what philosopher Gary Gutting has to say about it all.
Purpose in the Octagon
Check out this great feature that ESPN did on one of the best mixed martial arts gyms in the world: the Jackson/Winkeljohn Gym in Albuquerque, NM (which is, according to the article, “less a battlefield than an ashram”). The article and the accompanying video don’t address calling by name, but the themes certainly are present. To understand purpose as it relates to . . . well . . . punching someone in the face, you really only need to see what trainer Greg Jackson’s fighters (some of the best in the world) say about him:
I owe everything to you. – Nate Marquardt
You are creating a whole new me. – Cub Swanson
I am honored to be part of your team. – Georges St-Pierre
The whole article is worth a few minutes of your day, even if you aren’t a fight fan. Check it out. Isn’t it great when calling-esque stories show up in unexpected places?
Special Calling Issue of the Journal of Career Assessment
We just guest edited a special issue of the Journal of Career Assessment. The focus of this issue: calling, of course! You can check out the issue here. Lots of great research in this one.
Checklist for Evaluating Online Assessment Systems
In the old days, you had to meet with a counselor to take career assessments. The instruments came with a booklet, a score sheet, and a number 2 pencil. After you took the assessments, the counselor had to mail your answer sheet off for scoring before you could see your scores and experience an interpretation of the results.
These paper-and-pencil instruments are still around, but career assessments are now ubiquitous on the web. Many of them are packaged with other assessments in career assessment systems, which you usually have to pay a fee to access. There are many of these in the marketplace—close to a hundred by our latest count. How can you tell the good ones from the bad ones? The following questions can help you evaluate a career assessment system you might be considering. Before making the leap, logging on, and entering your credit card number, think them through and make sure you have the info you need to make the right call.
- Does the assessment system use instruments with strong support for reliability and validity? Reliability and validity evidence constitutes the quality control criteria that psychologists and other professionals use to evaluate assessment instruments. For more information, check out our information for psychologists. A good assessment system will provide at least a summary of reliability and validity evidence for its scores, probably with a link to more detailed, highly technical information. If it does not, contact the company and ask for this information. If the company is reluctant to provide it or says that it is not available, you cannot be sure that the information you’ll get from the assessments is accurate. Move on to another option.
- Has the assessment system as a whole been empirically evaluated for effectiveness? Only a few assessment systems have been tested in experiments designed to investigate the effect they have on users’ career decision-making confidence, among other outcomes. If the system you are evaluating has been tested, what were the results? If it has not been tested, how do you know it will be helpful?
- Does the assessment system provide an opportunity for you to interact with a professional who is trained to interpret your scores? Most career assessment systems are self-directed, but some provide access to human interaction, whether by phone or Skype or through a counselor in your area who can work with you to interpret the results the system provides. You can benefit from navigating an assessment system on your own, but research shows that interaction with a counselor significantly improves the effectiveness of computer-based career assessment systems.
- Does the assessment system link your assessment data to job titles that are predicted to be a good fit for your profile? Many systems provide more than just scores on your attributes—they use those scores to recommend good-fitting jobs. If a system offers this, from what source does it draw its information about jobs? Some use the O*NET, the U.S. Department of Labor’s occupational information database. Others have their own proprietary database. A good assessment system should disclose whatever source it uses, so you can evaluate the quality of the information.
- Can the assessment system link you directly to potential employers? Some systems provide job postings. Not many link you to actual positions on the basis of your profile of scores, but this kind of “e-Harmony for jobs” function is probably the way of the future.
- Is the assessment system cost-effective? Most systems are affordable, but the fees do vary. How much does the system you are evaluating cost, and how well does that cost reflect the value you receive? Will you derive proportionally greater benefit from assessment systems that are more expensive than others?
- What promises does the assessment system make? Be wary of any claims that success is guaranteed, or that even hint that your results will reveal the career path you “should” pursue. Career decision-making is a complex endeavor, and although assessment systems can play an important role in your process by giving you helpful information, that information is just one piece of the puzzle. It can inform your choices, but not make them for you. Success is never guaranteed, and no system can tell you what you “should” do with your life.
Quality Control Criteria for Assessments
For psychologists, reliability and validity serve as quality control criteria.
Reliability
The question of reliability is a critical one to ask of all measures. Formally, reliability is “the degree to which scores are free from unsystematic error.” If scores on a test are free from this kind of error, they’ll be consistent across repeated measurements. So, an easy way to think about reliability is the word “consistency.”
Note the word “unsystematic” in the definition. Scores on a measure may contain error but still be reliable. For example, if a thermometer always reads 5 degrees low, it is “reliable”–meaning consistent–but inaccurate. Or consider the case in which one professor is a harsh grader and another is a lenient grader. Each may be reliable, but one assigns grades that are systematically too low and the other assigns grades that are systematically too high.
Types of Reliability
There are four primary types of reliability that psychologists use to evaluate an assessment’s scores (a short computational sketch follows this list). They are as follows:
1. Test-retest reliability. This is a measure of stability, and it asks the question: How stable are scores on the measure or test over time? Do you get the same results (or at least very close to the same) on two separate occasions? This is computed numerically using a correlation coefficient. The thing to keep in mind with test-retest reliability is that it only makes sense if the trait being measured is supposed to be stable. Most variables that career counselors want to assess (e.g., interests, values, personality, ability) are quite stable on average, once people reach early adulthood. (Otherwise it would make little sense to measure these things and use them to inform career decisions that may have long-term implications.)
2. Alternative forms reliability. This applies only to instruments for which there are two or more separate forms that are intended to be equivalent. For example, the SAT and ACT each have multiple forms. This is a measure of equivalence, and it asks this question: Do the two different versions of this measure give the same results? Are the two versions essentially the same? To test this, a test developer would administer the two versions of the assessment to the same people and then calculate the correlation between the two sets of scores.
3. Internal consistency reliability. This is a measure of consistency within the test. How consistent are scores for the items on this test? Do all the items fit together? Do they all measure the same thing? There are two main types of internal consistency reliability: split-half and coefficient alpha.
- Split-half. To find the split-half internal consistency reliability, you’d start by administering your measure’s items to a group of people. After everyone has taken it, you’d randomly separate the pool of items into two halves. You’d treat these halves as if they were each a separate measure of the construct. Then you’d calculate the correlation between the scores you obtain from each half of the test.
- Coefficient alpha. One problem with split-half reliability is that you may get a different reliability estimate for each way you split the measure in half. You could separate the even-numbered items from the odd-numbered items, but you could also take the first half and the second half, and the resulting correlations may differ. Coefficient alpha is a reliability coefficient equal to the average of all possible split-half reliability coefficients. Conceptually, you can think of it as splitting the measure in half every possible way, calculating the split-half reliability coefficient for each pair of halves, and then averaging all of those coefficients. In practice there is a formula that makes calculating this easy. Coefficient alpha can also be thought of as an index of the degree to which each item contributes to the total score.
4. Inter-rater reliability. Sometimes you measure things not with a paper-and-pencil measure that the participant completes, but by having observers rate the participant’s behavior, such as in a behavioral assessment. Reliability is still important in this case, but it looks a little different. Now the question is whether the two (or more) raters agree with each other.
To establish evidence for inter-rater reliability, you would have two raters rate the same participants. Then you would calculate the correlation (or level of agreement) between the two raters’ ratings. Once you have established evidence for inter-rater reliability, you can be confident that the ratings are consistent across raters.
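For readers who want to see the arithmetic, here is a minimal sketch in Python (using numpy) of how these coefficients can be computed. The dataset is simulated and every number in it is hypothetical; real test development would of course use actual respondent data and purpose-built psychometric software.

```python
# A toy illustration of the reliability indices above, on simulated data.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_items = 50, 4

# Simulate item scores that share a common "true" trait plus noise.
trait = rng.normal(0, 1, size=n_people)
items = trait[:, None] + rng.normal(0, 0.7, size=(n_people, n_items))
total = items.sum(axis=1)

# 1. Test-retest: correlate total scores from two occasions
#    (the "retest" is simulated as the same trait measured again).
retest = (trait[:, None] + rng.normal(0, 0.7, size=(n_people, n_items))).sum(axis=1)
test_retest_r = np.corrcoef(total, retest)[0, 1]

# 3a. Split-half: correlate the odd-numbered half with the even-numbered half.
split_half_r = np.corrcoef(items[:, 0::2].sum(axis=1),
                           items[:, 1::2].sum(axis=1))[0, 1]

# 3b. Coefficient alpha: k/(k-1) * (1 - sum of item variances / variance of total).
k = n_items
alpha = (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum() / total.var(ddof=1))

# 4. Inter-rater: correlate two raters' independent ratings of the same people.
rater_a = trait + rng.normal(0, 0.5, size=n_people)
rater_b = trait + rng.normal(0, 0.5, size=n_people)
inter_rater_r = np.corrcoef(rater_a, rater_b)[0, 1]

print(f"test-retest r = {test_retest_r:.2f}")
print(f"split-half r  = {split_half_r:.2f}")
print(f"alpha         = {alpha:.2f}")
print(f"inter-rater r = {inter_rater_r:.2f}")
```

In this simulation all four indices come out high because the items were built to share a single trait; with real data, each coefficient flags a different way that scores can fail to be consistent.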
Validity
Sometimes validity is referred to as addressing the question “does the test measure what it is designed to measure?” A broader and more appropriate way to think about validity is referring to it as addressing the question “does the test accomplish its intended purpose?” or “does the test meet the claims made for it?”
Types of Validity
There are three primary types of validity that psychologists use to evaluate an assessment’s scores. (The list below has four types, but the first doesn’t really count.) They are as follows:
1. Face validity. This really isn’t validity in a scientific sense. If a measurement instrument has face validity, it just means it looks like it measures what it’s supposed to measure. If you have a measure of interests and the items look like they are tapping into a person’s interests, you have face validity. This can be good if it builds rapport with the test-taker, but it doesn’t have any real scientific value. All things being equal, it’s good if you have it, but it doesn’t mean that the measure is accomplishing its intended purpose.
2. Content validity. Content validity refers to how well an assessment covers all relevant aspects of the domain it is supposed to measure. Conceptually, you could ensure content validity if you could write items to cover absolutely every detail about your construct. For example, if a test developer wanted to measure a particular style of leadership, that developer could write every single item she could possibly think of that would be relevant to that leadership style. Then she could take a random sample of those items and include them in the scale. Unfortunately, this is not only impractical, it is impossible.
Often, content validity is assessed by expert judgment. You could assess the content validity of a measure by having one or more experts examine the items and determine whether they are a good representation of the entire universe of possible items. If the leadership style measure described above has low content validity, it probably includes a lot of items that aren’t relevant to that style, and it probably leaves out items that are very relevant to it.
3. Criterion-related validity. This refers to how well an assessment correlates with performance or whatever other criterion you’re interested in. It answers the question of “do scores on the measure allow us to infer information about performance on some criterion?” There are two types of criterion validity: concurrent and predictive.
- Concurrent validity. This refers to how well the test scores correlate with some criterion, when both measures are taken at the same time. For an interest inventory, for example, a good question is whether people who are, say, engineers score high on a scale designed to measure interest in engineering.
- Predictive validity. This refers to how well the test scores correlate with future criteria. For example, what percentage of people will end up in a career field down the road that corresponds to high scores on scales designed to measure a person’s interest in that field?
4. Construct validity. Construct validity most directly addresses the question of “does the test measure what it’s designed to measure?” It refers to how well the test assesses the underlying construct that is theorized. To demonstrate evidence of construct validity, a test developer would show that scores on her measure have a strong relationship with certain variables (those that are very similar to what is being measured) and a weak relationship with other variables (those conceptualized as dissimilar to the construct being measured). There are two types, convergent and discriminant, and a small simulation after this list illustrates both.
- Convergent validity. Convergent validity is the extent to which scores on a measure are related to scores on measures of the same or similar constructs. For example, let’s say your personality test has an extraversion scale. You might expect that the more extraverted a person is, the more likely she or he is to have high levels of sociability. If there is a strong positive relationship between scores on your extraversion measure and the scores on measures of sociability, then your scale’s scores have evidence of convergent validity.
- Discriminant validity. Support for discriminant validity is demonstrated by showing that an assessment does not measure something it is not intended to measure. For example, if you have an extraversion scale in your measure, you might consider also administering a measure of emotional stability. We know that extraversion and emotional stability are two different things. If you ask people to take your scale along with a measure of emotional stability and you find a small correlation between scores on the two scales, you have shown that your scale measures something other than emotional stability. Note that you haven’t shown what it does measure; you have just shown what it does not measure.
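Here is a companion sketch, again in Python with numpy and simulated (entirely hypothetical) scale scores, showing the pattern of correlations that would count as convergent and discriminant evidence for the extraversion example above.

```python
# Simulated convergent/discriminant validity evidence for an extraversion scale.
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Two independent "true" traits for n hypothetical respondents.
extraversion = rng.normal(0, 1, size=n)
stability = rng.normal(0, 1, size=n)  # unrelated to extraversion by construction

# Observed scale scores = true trait + measurement noise.
extraversion_scale = extraversion + rng.normal(0, 0.5, size=n)
sociability_scale = extraversion + rng.normal(0, 0.5, size=n)  # similar construct
stability_scale = stability + rng.normal(0, 0.5, size=n)       # dissimilar construct

# Convergent evidence: extraversion scores should correlate strongly with sociability.
convergent_r = np.corrcoef(extraversion_scale, sociability_scale)[0, 1]

# Discriminant evidence: extraversion scores should correlate weakly with stability.
discriminant_r = np.corrcoef(extraversion_scale, stability_scale)[0, 1]

print(f"convergent r   = {convergent_r:.2f}  (should be large)")
print(f"discriminant r = {discriminant_r:.2f}  (should be near zero)")
```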
NOTE: An instrument’s scores can be reliable but not valid. However, if an instrument’s scores are not reliable, there is no way that they can be valid.
Why does this matter? If you want to know how “good” a career assessment is, ask this question: “What is the evidence of reliability and validity?” A counselor should be able to answer this question, and an online assessment portal should provide some basic information (and links to more detailed information) to show that its scores provide good information. If no information about a particular test can be found, we’d encourage you to avoid that assessment instrument, or at the very least to view scores generated by that assessment in only the most tentative way.
Do Your Career Goals Fit with Your Life Goals?
As we note in Chapter 4 of Make Your Job a Calling, most people agree that experiencing a positive sense of meaning—defined as “the sense made of, and significance felt regarding, the nature of one’s being and existence”[1]—is fundamental to living “the good life.” Yet how many take serious steps toward living meaningfully at work? One way to think through this question is to evaluate how well your career goals fit within the context of your life goals. Try this: for starters, think about your life as a whole. What, ultimately, is most important to you? How would you describe your life’s purpose? With answers to these questions in mind, list at least five life goals you are currently pursuing. (We recommend you write these down, either here, in a journal, or on a separate sheet.)
1.
2.
3.
4.
5.
Next, think carefully about your career for a moment—your current job situation, the kind of work you most want to do, and the steps you need to take to bridge the gap between these, if there is one. How close or far away are you from where you want to be? What role do you want your career to play within the broader context of your life? With your answers to these questions in mind, list at least five career goals you are currently pursuing:
1.
2.
3.
4.
5.
Now look closely at the goals you listed above for your life and your career. To what extent are your career goals in line with your life goals? Are you happy with your answer to this question? If not, what needs to change?
[1] Michael F. Steger, Patricia Frazier, Shigehiro Oishi, and Matthew Kaler, “The Meaning in Life Questionnaire: Assessing the Presence of and Search for Meaning in Life,” Journal of Counseling Psychology 53, no. 1 (2006): 80–93, at 81.