The least favorite part of my job is grading students, so this semester I decided to outsource some of it.

In my Social Media for Reporters class at UNC, 20 percent of each student’s grade will be based on the number of points his or her Klout score rises over the course of the semester. And the best thing about grading this way is probably not that it’s easy, but that it’s flawed.

Boiling a semester’s worth of effort and accomplishment down into a single number has always struck me as having a false sense of precision. More than once I’ve looked down at the end of the semester and wondered how one student or another ended up with a grade so much worse — or better — than I would have handed out based on gut instinct alone.

That’s the problem that many folks seem to have with Klout and other similar social media metrics tools. Boiling continuous interaction across a variety of social networks down into a single number opens lots of room for argument — sort of like debating whether the impact of Angels outfielder Mike Trout’s 129 runs and 49 stolen bases is more deserving of the MVP award than Detroit third baseman Miguel Cabrera’s .330 batting average, 40 home runs and 139 RBI.

Boiling multiple data points from disparate contexts down to one, final judgment is often overly simplistic. But we do it.

Room for improvement

The question for me is whether we’re doing it the best way possible, and how we might do it better in the future.

Klout doesn’t release the algorithm it uses for calculating scores, and it doesn’t disclose the distribution of those scores. The most precise piece of data it shares is that the average Klout score is 40.

How is that possibly fair to students who are struggling to raise this arbitrary number that’s contrived inside a black box? It’s fair because it transforms the class from a workshop on button-pushing to an exercise in hypothesis testing, strategy and critical thinking. Students — who often approach grades with calculating economy of effort — don’t know what they have to do to boost their Klout scores, so they are forced to design simple experiments, isolate variables, and generalize their findings.

We aren’t totally shooting blind. Here’s what we know about how Klout creates its scores:

• There are more than 400 variables in its calculation.
• New variables were added and scores were redistributed in August, just before the semester started.
• It only counts networks it can see — so either public posts, or private posts that you’ve connected to Klout.
• Your Klout score is a reflection of your activity within the last 90 days.
• New Klout scores are released each morning. Older data is decayed in favor of newer data. (But Klout doesn’t say at what rate data is decayed.)
• The score factors in how much content you create compared to how much engagement you receive.
• Klout says it attempts to measure engagement equally across all the networks it monitors, so that it doesn’t favor activity on one network over another.
• On Twitter, Klout looks at retweets and mentions. And it is better to be retweeted or liked by people who do those things rarely than by people who do them often.
• On Foursquare, it measures To-Dos and Tips.
• Since late last month, you get extra Klout credit if people search for your Wikipedia page on Bing, and if you appear as an expert in the “People Who Know” section of Bing’s sidebar.
• Some networks — YouTube, Instagram, Tumblr, Blogger, WordPress.com, Last.fm and Flickr — can be connected to Klout, but don’t affect your score.
• There is neither reward nor punishment for simply adding networks that you do not participate in.
• Adding a new account is reflected in your Klout score within 24-48 hours.
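Taken together, the 90-day window and daily decay suggest a score shaped something like the following toy calculation. To be clear, this is my own back-of-the-envelope sketch, not Klout's formula: the decay rate is an invented placeholder (Klout doesn't publish its real one), and the actual algorithm blends more than 400 variables.

```python
from datetime import date, timedelta

WINDOW_DAYS = 90   # Klout says only the last 90 days of activity count
DECAY_RATE = 0.97  # hypothetical daily decay; Klout doesn't disclose its rate


def toy_score(events, today):
    """Toy influence score: sum engagement events inside a 90-day
    window, discounting older events exponentially so that newer
    activity outweighs older activity.

    events: list of (event_date, engagement_value) tuples.
    """
    total = 0.0
    for event_date, value in events:
        age = (today - event_date).days
        if 0 <= age < WINDOW_DAYS:
            total += value * (DECAY_RATE ** age)
    return total
```

Under a scheme like this, a retweet earned yesterday is worth more than an identical retweet from two months ago, and anything older than 90 days drops out entirely — which matches the behavior students see when they let their accounts go quiet.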

What’s missing

But there’s also a lot we don’t know. Perhaps the most important piece of missing transparency is the “difficulty rating” that students should receive for each additional point increase in their Klout score.

Two students who had almost no social media activity when they started the semester registered initial Klout scores of 12 and 18 within the first week, but have seen little movement since. Then again, the two students who started at 55 have also seen little growth.

The most rapid growth in Klout scores during the first four weeks I’ve been tracking them has come from the students who had scores in the 30s and 40s. One student jumped from 33 to 52 and another from 42 to 58. But another moved only from 43 to 46.
I wanted to measure only growth that happens during the semester, so as not to punish students who started out with little or no social media experience. But what I may have ended up with is a system that punishes students who began with extensive social media engagement.

In an effort to prevent sandbagging, I’m distributing the Klout portion of their grade on a curve relative to the class. But that means all of my students could end up with a number that’s in the top 10 percent of all Klout scores and still get an average grade. The only safety valve for that is my promise that I will give an “A” Klout grade to any of my students who end the semester with a score higher than mine, regardless of where they started. Right now, that bar is set at 62 — third among my UNC colleagues, below Chris Roush and Paul Jones.
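The curve and the safety valve can be sketched roughly like this. The 62-point instructor bar comes from the paragraph above; the percentile cutoffs are invented for illustration and are not my actual gradebook arithmetic.

```python
def klout_grade(start, end, class_gains, instructor_score=62):
    """Toy version of the curved Klout grade described above.

    Safety valve: finish above the instructor's score and you get
    an A regardless of where you started. Otherwise, your point
    gain is ranked against the rest of the class.

    class_gains: list of every student's point gain this semester.
    The letter-grade cutoffs below are hypothetical placeholders.
    """
    if end > instructor_score:
        return "A"
    gain = end - start
    # Fraction of the class whose gain this student beat
    pct = sum(1 for g in class_gains if g < gain) / len(class_gains)
    if pct >= 0.9:
        return "A"
    if pct >= 0.6:
        return "B"
    if pct >= 0.3:
        return "C"
    return "D"
```

Note the failure mode the paragraph above describes: because the curve is relative, a class full of high absolute scores can still produce middling grades — which is exactly why the absolute safety valve exists.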

In the end, I’ll add my own judgment about my students’ effort and ability to use social media as reporters. I’ll consider qualitative measures such as how trusted they are on their beat, whether they used it to give voice to the voiceless, hold powerful people accountable, shine light in dark places, explain our increasingly complex and interconnected world, and get the right information to the right people at the right time.

A high Klout score is something I’d expect from a solidly average student. A B student will be able to pick apart and critique Klout’s system. And an A student? Someone who will one day build a better Klout.

Ryan Thornburg researches and teaches online news writing, editing, producing and reporting as an assistant professor in the School of Journalism and Mass Communication at the University of North Carolina at Chapel Hill. He has helped news organizations on four continents develop digital editorial products and use new media to hold powerful people accountable, shine light in dark places and explain a complex world. Previously, Thornburg was managing editor of USNews.com, managing editor for Congressional Quarterly’s website and national/international editor for washingtonpost.com. He has a master’s degree from George Washington University’s Graduate School of Political Management and a bachelor’s from the University of North Carolina at Chapel Hill.

Photo by Tyler Ingram on Flickr and used here with Creative Commons license.