Objective distance is overrated. Try getting a little more personal.
Editor’s note: This article is part of a new MIT SMR series about people analytics.
I was told recently about a people analytics group that prided itself on its independence. They did not mingle with operations, they never made site visits, and they did not explain their models. They aspired to be the alternative view, uncorrupted by “the way things have always been done.” They reasoned that it’s easier to see things clearly when not down in the muck.
If you spend any time studying decision-making, you know this kind of independence is highly prized. Sources get more weight if they are uncorrelated with other sources. But if you spend time in organizations, you know this approach has a downside, too. In practice, decisions rarely come down to the optimal weights prescribed in a model. For better or worse, in the messy real world of ambiguous evidence and contentious objectives, organizational decisions — especially those about the people you’re hiring, developing, managing, and trying to retain — usually hinge on relationships and trust.
To have impact, analysts must learn to traffic in that currency. And to accomplish that, they must wade into the muck. In the end, perfect independence is not a virtue but a vice.
Sig Mejdal, one of the most successful analysts in baseball, understands this. Mejdal left a career as an aeronautical engineer to work for the St. Louis Cardinals in 2005, the dawn of the Moneyball era. Helping with the team’s player draft, he was there for a successful seven-year run, including two World Series championships. After moving to the Houston Astros with new general manager Jeff Luhnow in 2012, he helped rebuild that long-suffering franchise, culminating in yet another World Series in 2017.
How does Mejdal spend his time? In the summer of 2017, he was a coach in Troy, New York, deep in the Astros minor league system. This 51-year-old was wearing a uniform, coaching first base, warming up players, and eating with the team after games. The top analyst in the organization spent his summer evenings riding the team bus between small towns in upstate New York!
The Astros are considered a model for blending analytics with traditional expertise. They took this unusual approach with Mejdal because of their commitment to embedding analytics in the organizational DNA. They wanted to break down the barriers that typically exist between those who think in regressions and those who can hit 95-mile-per-hour fastballs. They wanted to create opportunities for players and coaches to ask “the analyst” questions and for the analyst to ask questions of them. It worked so well in 2017 that Mejdal did a second tour the next summer.
Analysts in all kinds of organizations can learn from a baseball executive riding a minor league bus. I’ve worked as an analyst in academia and industry for almost 20 years, slowly coming to appreciate this approach. As I’ve studied the matter and learned from others, a few practices have stood out. We can see them in Mejdal’s example, and we’ll look at them in more detail below.
Some of these tactics are useful in any job that involves shaping decisions in an organization, while others are tailored to the unique challenges in analytics. They are all geared toward individual analysts, though — not their managers or organizations — because perfect conditions for influence rarely exist, and we cannot depend on others to create them for us. An analyst wanting to make an impact must play an active role.
Build Your Network
It’s a shame that spreadsheets and cocktail parties don’t mix better. For many analysts, the idea of networking is not only uncomfortable but dubious. Some feel it’s possible to be good at models or good with people, but not both, as if caring about relationships undermines one’s technical work. This mindset is a real handicap given how much analytic work depends on other people. As was said of Jim Wright after he was deposed as speaker of the U.S. House of Representatives in the 1980s, “Being a loner eliminates a safety net of both information and goodwill.”
More than four decades of empirical research suggests that people derive professional benefits, including both formal and informal power in organizations, from the size and structure of their social networks. For the past 10 years, I’ve looked more closely at this relationship by assessing the influence behavior of thousands of executives and students. It turns out that logical reasoning (a hallmark of analysts) and network building are among the influence tactics least often used together. Many analysts are conditioned to believe that you shouldn’t have to get to know people if you are sufficiently good at your job. I am regularly astounded by how eye-opening they find networking research, even in 2019.
Because it’s rare, the ability to excel at both logical reasoning and relationship building is especially valuable. One trick is to get to know people before you need them. Waiting to build a relationship until it serves a purpose is ineffective and disingenuous. The alternative, investing in relationships continuously, takes discipline. Today’s urgent needs tend to crowd out longer-term investments, but cultivating relationships removes the hidden agendas that leave many analysts feeling queasy about networking.
Go to the Field
Analysts, almost by definition, traffic in secondhand information. Often secluded in an office remote from the front lines, they are subject to the perception of being out of touch and uninvested. Decision makers, on the other hand, are more likely to be on the front lines, or at least to have come from the front lines. Analysts must find ways to bridge that gap.
Sergio Vieira de Mello was one of the most successful diplomats of his generation. He spent his entire career with the United Nations, working in trouble spots around the globe — Lebanon, Cambodia, Bosnia, and Iraq, among others — as well as at the U.N. headquarters in New York. Spending time in the field was one of his negotiation strategies. “Sergio preferred to be in the field, to find out what people needed,” a longtime associate observed. “But he also knew that being in the field gave him more credibility in political discussions when he returned to capitals.”
Going to the “field” is about learning, first and foremost. But it is also about building trust. We’ve known since Aristotle that ethos is a vital part of rhetoric. We might like to think a model, an idea, or a data set should stand on its own, but that is simply not how people are persuaded. They care about the persuader. They need to believe in the person selling the idea.
When the decision makers are in the field, or from the field, anyone without that experience is suspect. And different. This poses an ethos problem for most analysts and is an important fight to take up. You need evidence of personal familiarity with the front lines — stories, contacts, concrete examples. Find ways to accumulate that evidence. You’ll also learn something along the way.
Avoid Black Boxes
People have a hard time believing what they don’t understand — especially when the information contradicts received wisdom. This is a real challenge if you are working with statistical tools invented only in recent years and communicating with colleagues who have no statistical training at all. But this is your problem, not theirs.
Prasad Setty has run people analytics at Google for more than 10 years. Early in his tenure, he tried to persuade engineers to use a model his team built for deciding whom to promote. Setty’s team was smart and well-intentioned, and their model performed well, saving weeks of employee time. The trouble was that the model was too complex to explain. The engineers rejected the innovation as too much of a “black box.”
You need to explain the machinations underlying your advice in the language of the decision maker. This ability can make or break an analyst.
The black box problem is only growing with the rise of machine learning, a broad category of statistical methodologies whose power is matched only by their opacity. One of the most important fronts in statistics is making machine learning more transparent. Wharton professor Hamsa Bastani and colleagues, for example, have developed techniques for doing so. They find that as they make a statistical algorithm more transparent — by, say, approximating the complex model with a simple decision tree — users are better able to spot problems and improve the model.
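The idea can be sketched in a few lines. The example below is purely illustrative (the scoring rule, feature names, and cutoff are invented, and the published techniques are far more sophisticated): it queries an opaque scoring function and then searches for the single-threshold rule that best mimics its yes/no decisions, giving users something they can inspect and argue with.

```python
import random

# Hypothetical "black box": an opaque scoring rule for promotion decisions.
# (An invented stand-in, not any real model from the article.)
def black_box(experience, peer_rating):
    return 0.4 * experience + 0.6 * peer_rating + 0.2 * experience * peer_rating

random.seed(0)
candidates = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(200)]
labels = [black_box(x, p) > 6.0 for x, p in candidates]  # opaque yes/no decisions

# Surrogate: find the single-feature threshold rule that best mimics the box.
best = None
for feat in (0, 1):
    for cut in [c[feat] for c in candidates]:
        preds = [c[feat] > cut for c in candidates]
        acc = sum(p == l for p, l in zip(preds, labels)) / len(labels)
        if best is None or acc > best[0]:
            best = (acc, feat, cut)

acc, feat, cut = best
name = ["experience", "peer_rating"][feat]
print(f"Surrogate rule: promote if {name} > {cut:.2f} (agrees with box {acc:.0%} of the time)")
```

The surrogate will never match the black box perfectly, and that is the point: the gap between the simple rule and the opaque model is exactly where a conversation with decision makers can start.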
I’ve observed that this insight is true for statistics more generally. When users fully understand what a model is doing, they are more likely to believe the model and, ideally, help improve it.
Give Away Some Control
Recently my colleagues and I have studied algorithm aversion, which is the reluctance to rely on imperfect algorithms when making decisions, even when those algorithms outperform intuitive judgment. In trying to overcome this aversion, we discovered that people can be encouraged to rely on algorithms more heavily if they can modify them, even if only slightly. By giving up just a little control, an analyst can have much greater impact.
A group of us applied this insight in a recruitment setting when helping reengineer graduate admissions at Wharton. We built an optimizer that recommended a portfolio of students for each admitted class. The goal was to give the admissions team as much of their objective as possible (identifying the candidates with the best expected classroom performance, the greatest prospects for a successful career, and so on), subject to the constraints the team faced, like class size and geographic preferences. The optimizer did all of this instantaneously, via a spreadsheet, rather than through a weeklong committee meeting.
This was a big change for the group, and many staffers were understandably reluctant to adopt the model. To ease their concerns, we gave them veto power over every decision, and we left 10% of the class to be filled outside the model, using whatever method they desired. We’ve learned how to improve the model by watching what they accept and what they change. And with each passing year, the committee revises a smaller share of the model’s recommendations.
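In code, that control-sharing arrangement might look something like the minimal sketch below. Everything here is hypothetical (the `admit` function, candidate names, and scores are invented for illustration): the model fills most of the seats from its ranking, staff can veto any individual pick, and a fixed share of the class is reserved for them to fill by whatever method they prefer.

```python
# A minimal sketch of "giving away some control": the model ranks candidates
# and fills most of the class, but humans hold a veto and a reserved share.
# All names and numbers are illustrative, not Wharton's actual system.

def admit(candidates, class_size, vetoed, human_share=0.10):
    """candidates: list of (name, model_score); vetoed: names the staff rejected."""
    model_slots = int(class_size * (1 - human_share))
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    picks = [name for name, _ in ranked if name not in vetoed][:model_slots]
    reserved = class_size - model_slots  # seats the staff fill however they like
    return picks, reserved

candidates = [("Ana", 9.1), ("Ben", 8.7), ("Cam", 8.2), ("Dee", 7.9), ("Eli", 7.5)]
picks, reserved = admit(candidates, class_size=4, vetoed={"Ben"})
print(picks, reserved)  # model fills 3 seats, staff keeps 1
```

The veto set and the reserved seats cost the model little, but they give the people who live with the decisions a reason to engage with it rather than route around it.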
Stay Humble
It is ironic that analytics, with such potential for counterbalancing the overconfidence rampant in intuitive judgment, can itself foster overconfidence. But that’s one of the biggest problems we see in this field. Whether because of “overfitting” (creating overly complex models to accommodate idiosyncratic data), small samples, or nonstationary environments, analysts often believe their models are better than they truly are.
Overconfidence is one of the most robust findings in psychology — and a significant obstacle to rational decision-making. In research with Richard Thaler, I have seen the havoc it can wreak in talent identification.
Personal humility is a practical antidote. As the economist Steven Levitt, coauthor of the Freakonomics series and, objectively, an expert in a number of fields, notes, “You can’t learn until you admit you don’t know anything. There is so little I know.”
The best analysts recognize the limits of their models. This recognition is typically earned the hard way: through experience. That means reps are important — make predictions, get feedback, improve your model, repeat. The (almost) inevitable consequence is greater humility.
Legendary executive coach Bill Campbell once noted approvingly of a maturing leader, “She’s gone from ‘I know the answer’ to ‘I have to sell the answer.’ ” New analysts spend almost all their time getting the answer. Experienced analysts, especially successful ones, spend at least as much time figuring out how to sell their answers. And the tactics described here can help with that.
Must we really go to all this trouble? This sounds hard, and kind of inefficient. Isn’t it enough just to be right? These are understandable questions. After all, analysts are often trained in the most sophisticated methods and work with the best available data. But making a compelling pitch is a necessary part of the work. Besides, few people, and fewer analysts, can simply impose their will on an organization. And it’s a bad idea anyway. Even algorithms require care and feeding. When they’re forced on others, they tend to find their way to the organizational attic, safe but rather underused.
So step away from your models, analysts! Go to the field, get in the muck. Don’t be afraid to “become one of them.” You will be changed, and your work will be better for it. It will also have greater impact.