PDD: Episode 7 - People Analytics
===

Eddie Flaisler: [00:00:00] So today's episode is incredibly meaningful to us. We are not answering anyone's question, but sharing some perspectives about a topic that is probably on the mind of most engineering leaders at this time of year, and that's assessing individual performance.

Eddie Flaisler: It's the season of mid-year reviews, and I feel like it's particularly tough this year as companies start to emerge from one of the biggest financial shocks of the past decades. And now they have to understand what meeting expectations will mean going forward.

Morgan VanDerLeest: Yeah, it's definitely a tough time for folks and, you know, we can talk software all day, but let's remember software really is about people. And at the end of the day, individual performance, particularly the perception of performance, is one of the biggest indicators of whether someone is going to continue their career at a particular company.

Morgan VanDerLeest: There's a lot of layers to this. I love it.

Eddie Flaisler: So many layers. Cue the intro. Let's do this.

Morgan VanDerLeest: I am Morgan.

Eddie Flaisler: And I am Eddie.

Morgan VanDerLeest: Eddie was my boss.

Eddie Flaisler: Yes.

Morgan VanDerLeest: And this is PDD: People Driven [00:01:00] Development.

Morgan VanDerLeest: How do we even start talking about individual performance assessment? I know you keyed in on that word individual, but is it really individual? Because it feels like we always compare. We calibrate and benchmark against others when we're making a judgment call about a particular person.

Eddie Flaisler: You know, I don't actually think we should start with how to assess employees at all. We should probably begin with an acknowledgement of what we as people, regardless of our role or experience, are qualified to say about others.

Morgan VanDerLeest: Fair point. So to set some expectations for listeners: should we get into a philosophical discussion about what performance is?

Eddie Flaisler: No, no. I think that's out of the question. For this to bring value, we have to be practical. But you tell me, Morgan, you've been managing for so many years. What's your sentiment analysis for all the performance conversations you've had with people?

Morgan VanDerLeest: You know, performance conversations are this thing that's expected everywhere. They're a key component of management, and they can be incredibly stressful on both sides, for the report and for the manager. For an individual, that [00:02:00] conversation can be the difference between being able to focus on your work and your impact over the next six or 12 months, or focusing on keeping your job until that next review cycle. And on a manager's end, you're trying to do your best to distill an individual's performance over whatever that period of time is, three, six, 12 months, into an understandable assessment, and to do so for your entire team. And this is usually happening while both of you are also managing a full workload.

Morgan VanDerLeest: It's a lot and it's tough to do well.

Eddie Flaisler: Yeah, I totally think so, too. And that's why I believe we should start by acknowledging the limitations of evaluating performance because it helps us do it better and might also give some language for explaining decision making to those impacted by our assessments.

Eddie Flaisler: I always feel like that's the point, isn't it? It's not about good or bad. It's about describing how you reached a certain conclusion and making sure that we, on our end, do our due diligence in transparency, in data-oriented decisions, and in setting up our people for success.

Morgan VanDerLeest: I couldn't agree more there. I [00:03:00] think more often than not, it's: how do we go about telling the story of this person's performance, in relation to everything else going on? So let's take a step back. When we approach performance evaluation, what do you think we need to know about ourselves and the situation?

Eddie Flaisler: Yeah, so I will go with what are probably the least debatable facts first, and these are the ones the science of statistics tells us. The first thing is that performance undoubtedly entails components of natural tendency and effective work, but it also has a built-in element of noise in the data, which more often than not is actually overwhelming compared to the other inputs in the equation. And that element I'm talking about is luck.

Eddie Flaisler: Luck sounds like this magical concept that has no business in a corporate setting, but it is simply defined as anything in the environment that cannot be fully controlled. And if you think about it, a lot of stuff falls into that category. It isn't just about getting a cool project to work on so you can shine.

Eddie Flaisler: It also can be about whether or [00:04:00] not a random customer action managed to cause an outage that made you look like you're not good with quality, right? While others' code, which is far less tested, didn't have that happen. It can also be whether you landed in a scrum team where teammates who are such good friends, who can't stop chatting with each other, end up making decisions in one-on-one chats without realizing they're actually not sharing them with you, and you're just never clear on what assumptions to make when you implement some functionality.

Eddie Flaisler: You know on the flip side, you can also land with a manager whose hands on involvement in your work makes you feel confident, right? While others will interpret that as micromanagement. You can be assigned to a product that is brand new, and that doesn't suffer from any of the maladies of legacy systems.

Eddie Flaisler: So it's much easier for you to do well. All these things impact our performance, or, as you said, the perception of our performance, so much. Not to mention the biggest luck of all: a manager whose articulation of their team's achievements is [00:05:00] naturally passionate and optimistic, while another one makes their teammates sound like they're doing nothing, simply because they themselves approach the world with more cynicism and apathy.

Eddie Flaisler: The performance story is so important because it's about people. You can't just look at numbers. It is about the story you want to tell, whether or not you're trying to be super accurate and super data-oriented.

Morgan VanDerLeest: Such a good point. I think luck is just really underattributed in the opportunities that people have. There's the adage: you make your own luck. Yes, that's true to an extent on an individual basis, someone being able to bring that to the table to develop their own opportunities.

Morgan VanDerLeest: But as a manager of people, you have to recognize the role that luck plays within the work of your team and the business, and do your best to help manage it out. Making sure that luck is not this thing that only helps some people and not others, or, in fact, could even hurt others. You've got to balance that out, figure out how best to not let it take away from people within your [00:06:00] team.

Morgan VanDerLeest: And it's funny, as much as I chafe at using data incorrectly, that's what luck is. All luck really is, is statistics anyway. There is some statistical degree at which this thing can happen. How do I best make sure that it is either spread out, or averaged out, or has a lower impact, negatively or even positively in some senses? Because if somebody is influenced too positively by a thing, that can take away from everyone else too.

Eddie Flaisler: One hundred percent.

Morgan VanDerLeest: You may not want to do that. You may want to allow this incredible opportunity to happen. But if it's only happening to one person and everyone else gets left behind, that may not be a good thing for the team or even for the business. So how do you look at those things? It's tough.

Eddie Flaisler: I think this is exactly right, because you mentioned managed luck, and that's kind of an interesting way of putting it, because life happens, work happens, right? Situations, business constraints. One thing that hopefully we've been clear about so far in our podcast is that even [00:07:00] though we're approaching the work from a people-oriented perspective, this is a business and this is software, and ultimately things happen that are not by nature necessarily people-first.

Eddie Flaisler: So you need to find the balance between what's right for the technology, for the business, and what's right for the people. And to that end, it's true that you cannot control these events, nor can you ignore them, and you need to manage them. But you can control, you can definitely control, how you approach assessing people based on your overview of these events.

Eddie Flaisler: So, you mentioned statistics, let's start with that. I think sample size is something really important to remember when people discuss performance of individuals in calibrations compared to other team members.

Eddie Flaisler: You know, I am incredibly grateful every time I see a real effort put into the mechanics of performance calibration, whether it is a nine-bins method, or when you go by level, or even when you orchestrate this massive cross-organization calibration because you're in [00:08:00] fact aware that a larger sample size can give you a better signal. But ultimately, when it really comes down to comparable situations and expectations that are actually useful in calibration, the sample size you're comparing against is fairly small. Or, in plain terms, there aren't that many people you can compare to in terms of scope, in terms of what success looks like for them, and so on.

Eddie Flaisler: And it's an issue. It's an issue because when you have a small sample size, the variance is insane. So what may appear as an incredible success or a terrible failure when compared to others might not actually be that far off if you had more data points.
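
To make the sample-size point concrete, here is a minimal Python sketch (not from the episode; the pool sizes and the score distribution are arbitrary assumptions). It simulates engineers who are all equally capable and shows how much more the average of a small comparison pool swings than that of a large one:

```python
import random
import statistics

# Assumption: every engineer's "true" output comes from the same distribution,
# so any spread between pools below is pure sampling noise, i.e. luck.
random.seed(42)

def average_of_sample(n: int) -> float:
    """Observed average performance of a pool of n comparable engineers."""
    return statistics.mean(random.gauss(100, 15) for _ in range(n))

for n in (4, 100):
    trials = [average_of_sample(n) for _ in range(10_000)]
    print(f"pool of {n:>3}: spread (std dev) of pool averages = "
          f"{statistics.stdev(trials):.1f}")

# A pool of 4 swings roughly five times more than a pool of 100
# (15/sqrt(4) = 7.5 vs 15/sqrt(100) = 1.5), so a "standout" in a tiny
# calibration pool may just be the luckiest draw.
```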

Morgan VanDerLeest: That's the thing about statistics, right? Without a large enough sample size, you really can't compare and find proper trends, but we do it anyway. We need to do a better job of this at an individual, team, and organization level, knowing that we don't have proper sample sizes. And that's why I think it's so important to set those expectations and communicate [00:09:00] about expectations with folks. You know, you may not be able to say this person is actually average for a mid-level engineer at our company, or met expectations for a mid-level engineer at our company.

Morgan VanDerLeest: But you can say this person completed the expectations that we set at a previous review or this person exceeded those expectations because X, Y, Z.

Eddie Flaisler: This is so true. And it again comes back to that contrast: as opposed to trying to define or bucket a person into something (and I want to acknowledge the practicality of needing to have these mechanisms in place; we can dive into this a little later), the approach of: here's what the person was supposed to achieve,

Eddie Flaisler: here's what was achieved, is much more feasible and much more fair than asking: are they good? Are they bad? Are they mediocre? Are they excellent? I feel that's something we're not necessarily equipped to say about people.

Morgan VanDerLeest: Relating to expectations: if your expectation is that somebody will act in accordance with the [00:10:00] skills matrix, that's a pretty sad expectation. You need to have more explicit things in there to actually be beneficial to someone, because if a rubric or a skills matrix is written for everybody, just match this, just do this, no one can ever translate that well. Everyone's always asking for examples of things. So as a manager going into that: what does that mean for you, at this point in time, for this company, over the next six months? What does it look like for you to meet or exceed expectations, given what we expect and the projects that we have ahead?

Eddie Flaisler: And if you think about it, it's very similar to the sample size point, in the sense that what you're basically talking about is granularity, right? If, let's say, sample size is our x axis, then the more data points you have on the y axis, the more information you have.

Morgan VanDerLeest: So let's change course a little bit and talk solutions. Besides, we promised this wouldn't be everything that's wrong with a performance evaluation.

Eddie Flaisler: Well, totally. Yeah. I want to make this useful to people as opposed to some philosophical discussion, but I want you to let me rant [00:11:00] about one more thing.

Eddie Flaisler: Feedback. So in 2020, I did some consulting with a startup that suffered terrible turnover and the consistent input from exit interviews was that the managers were mean and aggressive.

Eddie Flaisler: So the CEO, who's my friend, because why would anyone else hire me to teach people to be nice, asked that I give his leadership team some training about feedback. Early in my research for that session, I found this really interesting piece in Harvard Business Review called "The Feedback Fallacy." And it had this key point that caught my attention, and that's the fact that the biggest problem with feedback is that humans are unreliable raters of other humans. So over the past half a decade, psychometricians have shown in study after study that people don't have the objectivity to hold in their heads a stable definition of an abstract quality.

Eddie Flaisler: So, like business acumen or assertiveness, and to accurately evaluate someone on these qualities. We just, we just don't have that ability. So rather, when we give [00:12:00] feedback at work, what we're effectively saying is: here is how I would like you to work with me. And I'll say that again. There are two people in this tango. It's about this person working with you, and that's a different, idiosyncratic relationship than person X working with person Y.

Morgan VanDerLeest: Context, context, context, right? Isn't that why those personality and communication style tests exist? The results of those are not just: here is how you work, but also: here is how you work with each of these other types of people, in these other styles. But they're not perfect. And at the end of the day, that specific versus general feedback is important. Actually, something that I like to do with reports is asking: what kind of feedback are you looking for? What are the areas of feedback that you want? So that we can get into some of those more specifics, and so that they can actually hear the feedback that's given to them.

Eddie Flaisler: Yes, absolutely. And it also means that when I talk about Morgan's performance, and when I talk about [00:13:00] Sarah's performance, and I deliberately chose a name typical of a woman, I might be subconsciously expecting different things of them, even when I try to be the most professional and objective that I can.

Morgan VanDerLeest: That's one of those things that we don't acknowledge enough, just within the profession of management and leadership: the biases that we bring to a situation. I know bias is a charged term and whatnot. It's just true. It's the way that we look at things and what we bring to the table from our own perspectives, which is why the perception of performance is so important. How can we document and share the context of the things that we're thinking about, and how we're thinking about them, with folks, so that we take out as much of this bias and difference of opinion across folks as we can? Anyway, evaluating performance is difficult. What do we do now?

Eddie Flaisler: Well you tell me: how did you approach these situations in the past?

Morgan VanDerLeest: I'll start by saying I haven't always done a good job, although I'm sure we'll get to some details or a story [00:14:00] or two later. The things that have worked well for me are: communicating regularly, both with my direct reports and also with my managers and stakeholders; setting goals with folks, and that doesn't have to be this specific, metrics-driven, I'm-having-this-specific-impact-on-a-thing goal. I think goals can also be guardrails, guidelines, impact-oriented; we figure out together how you're going to achieve this thing. Managing expectations well, making sure that, in that communicating and setting of goals, we are aligned on: this is what you're expected to do, or this is the outcome that we're expecting to achieve by you working on these things. And then representing folks well to stakeholders. Because folks can do incredibly well, be incredibly impactful.

Morgan VanDerLeest: And if that's not represented well to stakeholders, and not everybody is good at doing that themselves, then I'm doing a disservice to their career and their performance perception.

Eddie Flaisler: You know, Morgan, I just find this incredible, because I think there is a common thread, you probably noticed it, to all the things you're saying. These are [00:15:00] all housekeeping items that are happening well outside the actual calibration, or well outside the time window when you're supposed to assign ratings or have conversations.

Eddie Flaisler: So, you know, we got into this part of the session where we talk solutions, and I think the most important thing to talk about would be: what do we do throughout the year to set us up as managers, and especially the person we are rating and to whom we're providing feedback, for success during calibration.

Eddie Flaisler: And I can give my own set of principles that are important to me, which is pretty much aligned with yours. The first thing I would say is that, to the extent the system would let me, I've always tried separating reward and punishment from the performance cycle, and also being super clear with my team about what behaviors and outcomes trigger what action on the employer's

Eddie Flaisler: end. So if there is a performance issue, I don't wait for perf to address it. And if there is a business [00:16:00] outcome that can be directly and clearly tied to this person going above and beyond their scope, I flag that for recognition, preferably financial, right, as soon as possible. Not only does this alleviate a lot of panic around perf season, but it also allows us to focus the performance conversation on patterns that we observed over the cycle instead of these singular good and bad data points.

Eddie Flaisler: I will say, I want to call it out because I think it's really important: we're all humans. We all work in the constraints of a system. So, when I talk about flagging for compensation, I can give you my word that throughout my career I tried.

Eddie Flaisler: I was not always successful. When I talk about handling a performance issue, immediately when it occurs, I can again give you my word that I tried. I was not always successful. And by the way, the reason for my failure was not necessarily extrinsic. It was also intrinsic. I was busy. I had other things going on. Things happen. But that doesn't mean [00:17:00] we shouldn't keep trying.

Morgan VanDerLeest: That's all you can really do, right? Keep trying. Keep trying to do better. I always call back to a previous engineer I worked with; I loved his thinking of: we just want to make things good. We're in the pursuit of good. How do we continually make sure that we're doing that? You know, I love that decoupling of reward and punishment from the cycle as well.

Morgan VanDerLeest: Again, not always possible depending on your organization and to what extent you can do that, but it can be very important. Another thing for performance conversations that I've looked at is how I can make the conversation itself as transparent and open as possible. Things like sharing feedback in advance, setting understanding and context about how the conversation is going to go.

Morgan VanDerLeest: We're going to look at this. We're going to discuss. We're going to talk about action items. Even just having that level of preparation, and not screen sharing when you get in and somebody seeing their feedback for the first time. That can be an intimidating thing.

Eddie Flaisler: Very much so.

Morgan VanDerLeest: Right? What else do you have?

Eddie Flaisler: The second thing I have, which is again something I always try to do outside perf, is to round-robin as much as possible what we call promotable [00:18:00] work. You know, it's not perfect, but the right intention adds up. In a previous episode, I believe it was one or two ago, we discussed managers' tendencies to give high-value work to the same small subset of employees. And the argument is that others are not fit to take that on.

Eddie Flaisler: I'm not going to repeat everything that's wrong with this argument, but I need to walk into that performance calibration knowing I did everything I could to allow all my team members to shine, given the business constraints. So that's the idea. I feel it's kind of a prerequisite to know I have given, to the extent possible, everyone the right opportunity.

Morgan VanDerLeest: I agree. I think that's something that doesn't happen enough. I do think it's dependent on the organization, and how folks feel supported within that organization, whether they're willing to see that work distributed or not. I can certainly say there have been organizations where the attitude from folks is: I want every possible high-value thing to come to me, because I want to get the credit for the thing, which they may need to do for their own career and sticking with the place.

Morgan VanDerLeest: I've also been in great [00:19:00] organizations where folks are like, "Hey, actually, I'm not even sure if I should raise my hand to do this thing, because I want to make sure that other folks get the opportunity to do that." It's really special when you can make that happen. You know, one other point I wanted to call out, as a manager going into performance season, is helping prepare your people on what the process is.

Morgan VanDerLeest: What is the performance review system at your organization? What are the tools and cadence and expectations surrounding that? And also helping encourage them in advance of this to keep their own records of things. What are their wins? What are their work logs? Share progress and the things that they're working on.

Morgan VanDerLeest: Help others. These public things, they may show up in different ways for different people and different teams. But then you can kind of collate this and you have this body of work, portfolio, whatnot, in advance going into performance.

Eddie Flaisler: You know, that thing you said about an engineer not raising their hand to take something because they want someone else to get the opportunity, really resonated with me. And the reason it resonated is because that [00:20:00] generosity doesn't happen in a vacuum, nor is it due to some natural tendency, or you just being a good person versus a bad person, or confidence.

Eddie Flaisler: When it comes to people, you know, I was never an expert, but it has been my experience that it really is about the environment you're in. For you to feel comfortable to say, I have this opportunity, and I'm going to give it to someone else, you do need to know you will be taken care of, right? You do need to know that this is not going to play against you.

Eddie Flaisler: That this is not a one-time thing, that your performance is viewed and appreciated and acknowledged regardless. All these things are work we do as managers, as culture builders. And I feel that sometimes, to be honest, you know, earlier in my career, I found myself in situations where I was giving an engineer a hard time, saying: you're hoarding all the work.

Eddie Flaisler: You're hoarding all the opportunity. You're hoarding the light. And I did not [00:21:00] acknowledge what my responsibility was there. I did create a zero-sum game environment. I did create a situation in which, if you're not gonna take it, if you're not gonna fight for what's yours, you're gonna lose it.

Eddie Flaisler: Only later did I realize that I have work to do here as well, to create an environment where people feel comfortable being generous. So I love that you're calling it out.

Morgan VanDerLeest: Appreciate that. Okay. So in this model that we're discussing, perf season started, what now?

Eddie Flaisler: Okay, so we talked about all the things we do offline: recognizing achievements that may not be easily repeatable, or addressing specific performance issues. So that's all the positive and negative spikes in the data out of the way, which is what these are, right? We take care of them.

Eddie Flaisler: What I have left to do is to look at the main tool statistics has to smooth out the performance diagrams so they're more palatable, and that's consistency over time. Anyone who's ever attended a corporate calibration remembers hearing the people partner articulate these rightful warnings about recency bias and about hindsight [00:22:00] bias.

Eddie Flaisler: Hindsight bias is where I take these very few occurrences and extrapolate this pattern about the person that only exists in my head. Now, these warnings are all very important, and people truly are intentional about doing right by their people. But at least in my experience, the type of anecdotal evidence so often used in these meetings to make our assertions is inherently prone to not following those restrictions, because it's, well, anecdotal.

Morgan VanDerLeest: It's interesting, because I love anecdotal evidence of things, and I find that it's very valuable for how you provide context and tell that story, right? You're actually bringing meaning to the data and things that you do have. The problem is when those anecdotes are, again, just individual data points and not part of a cohesive story. I can't tell you the number of times I've been in a performance calibration meeting, and we're talking about an individual, and there's now some anecdotal story that somebody is telling about someone, that no one was aware of prior to this, and it's just now coming up, and, well, what do we do with this [00:23:00] now? Those are the things that we need to be talking about either before, or don't share it, because it's not helpful.

Eddie Flaisler: Absolutely. I think it's exactly that, because anecdotes are not a bad thing, right? Especially when you talk about something a little more abstract, like the how of doing something, like how the person interacts with others. Not everything is measurable through a number or deliverable.

Eddie Flaisler: But the question is: what do you use these anecdotes to build, right? Are they the main component of your decision making and articulation of the person's performance? Or are they used to add color to something that is a little more consistent?

Morgan VanDerLeest: So what would you do instead?

Eddie Flaisler: Well, you know, this problem has already been solved. It's just not strictly followed. You set goals for individuals and observe them act on these goals over a longer period of time. It doesn't have to be a static predefined list. But some of the most fair managers I know, including you, Morgan, keep track of what was asked of the developer and how they acted on it, both in terms of what they achieved [00:24:00] and how they achieved it.

Eddie Flaisler: And, you know, they use the trends emerging from that collection in determining how the person did.

Morgan VanDerLeest: So hang on a minute. I think there might be a bunch of problematic things with what you're saying. First, we're trying to get to this apples-to-apples so we can compare between individuals, but goals can be extremely different in how challenging they are to achieve. I think a great example of that was working with an engineer once: they were relatively new to the team, new to the tech stack, but were more senior, and it felt like a good opportunity for them to grow. And we had this new project coming up where it was a good chance for them to showcase their seniority, but it would have been better suited to another individual on the team who wasn't also handling those other dynamics of new to the team, new to the tech stack. So even though the goal could have been similar across folks, it was very difficult determining how hard it was for this individual to achieve, and ranking it there.

Morgan VanDerLeest: Another point: I don't think applying this to more senior roles like staff plus is practical, because isn't [00:25:00] the whole point with these levels that they're more about finding areas of development and independently acting on them, rather than being given something to do?

Eddie Flaisler: Okay, so you're asking two very good questions. Let's start first with the practicality and relevance of comparing goals. There is no question that we cannot just approach goals by having this performance cycle meeting where we calibrate and just say: this person achieved 7 out of 10, that person achieved 9 out of 11, and therefore person X is better than person Y.

Eddie Flaisler: That's all meaningless, right? It kind of makes the whole concept of goals pointless. But if you make a point to have open, regular conversations in an org's leadership team about how we assign work, then you reach the calibration meeting able to add color on a person's successes and failures. And those calibrating with you are actually equipped to understand and opine effectively, so you [00:26:00] can converge on a somewhat reflective rating. So, concretely, what I'm talking about is a regular, preferably monthly, exercise between members of an org's leadership team, which reviews not the individuals but the work assigned to them, and gives a sense of the changing expectations of different teammates.

Eddie Flaisler: You've given a perfect example of someone who is, on the surface, very senior, and you can expect a lot of them. But some context that was missing, or not necessarily missing, but not easy to see when you're the only one looking at it, was the fact that others, in terms of familiarity or experience within the team, were probably better set up to address that problem.

Eddie Flaisler: That's a team sport. That conversation of addressing the blind spots and reviewing our work assignment practices with others, it's super healthy. And I've done that in more than one place, and those have been some of the performance ratings I'm most at peace with in my career.

Morgan VanDerLeest: Really appreciate that. And it's interesting, because one of the things that I try to [00:27:00] do as a manager is making sure people have those growth opportunities, right? I want to make sure that those are round-robinned and well split up as much as we can. But there are also cases where you're setting someone up essentially to fail by having them do this thing that, while yes, it could have this benefit... luck may say otherwise. And how you go about mitigating that can make all the difference for that project, that person, and how they're interacting with the team and the company moving forward. Even something like pairing someone to work on that project could have been an entirely different scenario.

Eddie Flaisler: That's exactly right.

Morgan VanDerLeest: So, what about the staff plus question?

Eddie Flaisler: Oh man. I feel like that's a whole different can of worms. And no matter how many years I do this, I can't help but get frustrated when I look at a rubric for staff plus engineers. I've personally been in so many situations where adherence to the rubric, albeit a healthy, objective-ish way to assess, pushed a hard-working, effective engineer [00:28:00] into an underperformance rating, simply because they ended up pulling tickets from the backlog that desperately needed someone good to do them, and they were just not available to come up with the next world-class architecture that is expected of them in the rubric. So, instead of talking about how to adjust the process I described for staff engineers, I feel like making a few assertions that are true, in my mind, about staff plus. The first one is: I think an organization needs to be honest with itself about whether promotion to those levels implies a specific role and scope the person will be taking, or merely recognition of general high competence and excellent service to the organization over a long period.

Eddie Flaisler: Many years ago, I worked for a very old-school company that did do one thing I remain a huge supporter of. They were completely transparent about the following formula: you work here long enough. You deliver consistently on whatever we asked you to do, no matter how complicated or simple. You become a mentor. You become a source of [00:29:00] knowledge. Congrats: here's your staff, senior staff, principal engineer promotion. And you know what, Morgan? It was beautiful. No drama, no toxic competition, no wordsmithing of cases, just consistent delivery of the outcomes they asked you for. So, for example, if you're a principal engineer, and for some business-related reason you're asked to do intern-level work for the better half of the last performance cycle, we're not doing you a favor by marking you as meets expectations.

Morgan VanDerLeest: That does sound beautiful. It just feels like the exception to the rule, in most cases where that's possible. Because more often than not, at least in the organizations that I've been a part of or that I've heard of, there's almost always a financial constraint, a business value, tied to staff plus roles. So how do you bridge that gap? Just because you've been there for a period of time doesn't necessarily mean you're going to get promoted; there needs to be some other thing attached to that.

Eddie Flaisler: So what I hear you saying is that if everyone is doing an okay job and [00:30:00] everyone chose to stay here, we cannot promote everyone. We can't necessarily afford that, right? Whether financially, or we simply don't agree with that. What do we do then? And I actually think it's a very valid concern, because, as I said, that behavior is something I observed specifically at a certain company.

Eddie Flaisler: I actually will acknowledge that I have not seen that since; it's not very typical these days. I do think it's a nice solution, but here's what I will say. Let's say you're not going with this previous approach, and you want these titles to mean something very specific about the work these people do. I believe that, much like with management positions, staff plus jobs should be role-based.

Eddie Flaisler: In a healthy business environment, you don't become a manager of nothing, right? If there is no domain to be managed, you will not be a manager. Similarly, if I cannot justify a headcount dedicated to architecture, or full-time tech leadership, or even just having the mandate to solve high-level business problems without being bogged [00:31:00] down by work that is typical of more junior scope, I don't think I should promote to that level.

Eddie Flaisler: It's not fair to the person, so long as the rubric mandates that they act in this high-level capacity most of the time. So I guess what I'm trying to say is this: you are making a very good point for our listeners by bringing me down to reality and saying, okay, if we're going to add value to people, we cannot talk about this theoretical model.

Eddie Flaisler: We need to talk about what actually works. My point is: this is what actually works. And we can decide, nope, I'm just going to keep the rubric, and you need to have this insane business opportunity, but then you need to continue having these opportunities in order for me to rate you as meets expectations. But let me tell you, it doesn't work like that, right? I think one thing I was going to bring as part of my concluding notes is the statistical concept of regression to the mean, but it fits really nicely at this point. Most of the time, you will not [00:32:00] have these peak performances.

Eddie Flaisler: You will not have these amazing opportunities. Most of the time, work for all of us is pretty mundane. And all I'm trying to say is that if we make the exception the standard for a certain level, we're setting up everyone for failure. So I am perfectly fine with making a decision as an organization, for a variety of reasons, that we want to move forward like this, right?

Eddie Flaisler: An organization is a complex organism, and there are many considerations. But let's acknowledge the fact that it's not setting up the people or the process for success.

Morgan VanDerLeest: Funny, that last point you put in there about the organization being a complex organism. I love that. And that's the thing: we talk about living documentation for features and projects and whatnot, but we don't do that enough with how we manage performance. To risk going on a tangent, your rubric, or your skills matrix, shouldn't be this thing that we do once every two or three years, or longer, and don't come back to.[00:33:00]

Morgan VanDerLeest: It should always be this living and growing and shifting thing. Build it into your performance management process that you update the skills matrix, or provide examples for the skills matrix of somebody showcasing senior, staff, staff plus, of this particular category within the matrix, at a particular point in time. The other thing is that making this a living process helps you calibrate management of performance with what the organization is actually doing.

Morgan VanDerLeest: What engineers are actually doing. Because it's all fine and dandy to have this beautiful, perfect, awesome skills matrix, or this skills matrix that we borrowed from another company.

Morgan VanDerLeest: But if it's not reflective of the current team, and the current breakdown of folks that you have within your organization, and how they need to act, and the things that you need from people on your team to get the business to the place where it needs to be, to tie this back to business value and outcome,

Morgan VanDerLeest: you're failing the individuals on your team, because they don't have a clear map for what they're supposed to do next or what they could be doing next. But we can give them these [00:34:00] guideposts. Let's say your matrix is, you know, consider it a map. The examples are

Morgan VanDerLeest: guideposts along the way: okay, I know I'm doing a good job in this area because it's similar to this thing, or it's similar to that, or I can see how this relates in other ways. How can I make this a digestible and understandable thing for my people to work towards and work through and become the next example of?

Eddie Flaisler: Definitely. Isn't what you're saying here perfectly tied to that example you gave about the senior engineer? There were certain expectations at that level in terms of what type of value they're supposed to bring, how they're supposed to identify problems, how they're supposed to figure out the environment. But there was more responsibility on the manager to set up that person for success, because irrespective of how senior that person is, it doesn't mean they don't need help, and they don't need support from the people with the most context,

Eddie Flaisler: such as the managers, in order to do their job, right? So, we were talking about staff plus. I guess one last thing I'm [00:35:00] going to say is: just because a person has the mandate to solve big problems doesn't mean the burden of finding these problems should be on them, right? Managers are communication hubs.

Eddie Flaisler: We spend much of our time learning about what happens in the business, what happens in the organization. And we do that so our individual contributors, irrespective of level, can focus on their work and rely on us for context. Now, I am in complete agreement that at certain levels we expect independence. We expect people to work with ambiguity, to work with abstract problems like: positively impact metric A, which can lead to XYZ business result, or: directly solve this business problem. But that doesn't mean the individual needs to figure it out alone, right?

Eddie Flaisler: I've worked in environments where the performance rubric clearly expects you to come up with a problem to solve, and understand the climate, and understand the business situation. And, you know, it's always great to have this mechanism to collect feedback from the ground about opportunities from the [00:36:00] entire organization, so if an engineer identifies something, I'm really happy. But as a measure of performance? Yeah.

Eddie Flaisler: It's simply unfair. I've hired an engineer. I didn't necessarily hire a technical program manager, or a product manager, or an engineering manager, or an architect, all these roles that have their own scope and their own meetings to attend and their own conversations that they have. So, we need to be mindful about the level of support a person requires to actually do their job.

Morgan VanDerLeest: Part of being part of that larger living, complex organism, right? So, recap: we laid out the year-round groundwork before you get to the actual performance calibration, and discussed recommendations for what data to look at, and not look at, during calibration itself. Still feels like there's room to go deeper on this, and I'm thinking maybe along two axes.

Morgan VanDerLeest: Let's say you do this exercise you're talking about, where you regularly calibrate on the type of work given to people. I'm not sure what you're going to compare. Is it the difficulty? Is it the breadth [00:37:00] of a task? What is it? And the second axis: let's say the data for a certain org fully adds up in terms of work distribution, but ultimately the goals seem to be very lax compared to what you would expect as, say, the director or the VP.

Morgan VanDerLeest: Like a checkbox takes a month or something. Been there. How do you separate personal biases or agendas from legitimate questioning, even in those regular work calibration sessions?

Eddie Flaisler: Yeah. So, to start with the first question about what to compare: I think this whole exercise becomes much easier, not to say feasible, when you bucket individuals based not only on levels but also on engineering personas. One of my favorite concepts. So, the idea behind engineering personas is that different people naturally gravitate to a certain type of work.

Eddie Flaisler: So, which is typically a mixture of things they like to do and things they're good at. And the most common ones I know are Architect, Maintainer, Innovator, and Doctor. You know, any org can [00:38:00] make its own; just be consistent. That's the point, so everyone is measured through the same lens. Anyhow, these personas are pretty self-explanatory.

Eddie Flaisler: The architect enjoys design more than the code. The maintainer is not interested in building anything new, but has this amazing grasp of what we currently have, and is very efficient in taking care of the existing system and onboarding others. The innovator mostly wants greenfield work, and the doctor lives for the outages and bugs.

Eddie Flaisler: I can't even tell you how many times in my career I found myself in this unfortunate situation where I have a person who is an absolute rock star who contributes so much in the scope of their respective persona, and I had no choice but to rank them as underperforming because the rubric expected everyone to be the same.

Morgan VanDerLeest: Really tough. I've seen something very similar, and it very much feels like a double-edged sword. We're trying to make something, a rubric, a skills matrix, simple enough that everyone can read it and understand it and [00:39:00] digest it. But at the same time, there are just different ways to look at it and to actually measure someone's performance.

Morgan VanDerLeest: It's not that simple. How do we make sure this is as reflective of somebody's work and impact and exposure as possible? So, everything you said there: great. Love it. But let's bring this back to reality. What if folks are leaning more towards the architect persona? It's very possible, especially when the hiring process tends to bring in people that are similar to those you've hired before.

Morgan VanDerLeest: It's entirely possible that you've hired in a lot of people that fall under a particular persona. How do you incorporate those personas into performance evaluation?

Eddie Flaisler: Yeah. So, as often ends up being the case with things we talk about, and you kind of mentioned this in your question, you don't start in the actual moment of the performance evaluation. Any team in any profession has to be a blend of, I don't know, experiences, maturities, and personas.

Eddie Flaisler: Our job as managers is to understand what our team needs when [00:40:00] we hire and when we grow. You can't have a team of only technical leads, right? Because the competition will poison the dynamic. But you also can't hire only juniors because that will reflect in the quality and the robustness of what you build.

Eddie Flaisler: So it's definitely not an exact science, nor can the boundaries of what a certain persona is doing be strict, because, you know, life happens, and now you need Ms. or Mr. Innovator to fix bugs, or some maintainers to put their heads together on something new that needs to be built. A flexible skillset and a can-do attitude are mandatory, there's no question about that, but that doesn't mean we should give up entirely on hiring and rewarding personnel based on the right composition for a given team, instead of taking this one-size-fits-all approach.

Eddie Flaisler: Once the lens of personas is regularly used when discussing goals and people are bucketed as such, I think you will find it's much more straightforward to compare the work of different individuals. Does that make sense?

Morgan VanDerLeest: [00:41:00] Absolutely. And I'm glad you actually mentioned looking at the right composition for a given team, because that's something that, in the past, when somebody has asked me what's the most important thing you look for in hiring: it's team composition. It's making sure that I have the group of people that, together, is going to be able to deliver on this thing that the business needs.

Morgan VanDerLeest: So, excellent point. Now, what about the second question? That second axis: the work is distributed well, but it's expected to take a full month to add a checkbox.

Eddie Flaisler: What you're basically asking me is: how much time is too much time to build something? Which, I don't know, feels like the core of so many organizational disagreements, probably in every single company I've been in. And I must confess, I don't have a magic formula either, but I can tell you this: I always consider the biggest and hardest part of my job to be keeping myself, everyone on my team, my stakeholders, and my leadership honest about our engineering efficiency baseline.

Eddie Flaisler: So what I mean by engineering efficiency baseline is: assuming peak performance of all the people involved, [00:42:00] how long should it take to build a certain feature? And why? And I'm not just talking about DORA numbers like cycle time, but also about acknowledging, and making a conscious decision to either actively address or work with, the realities that bottleneck our pace.

Eddie Flaisler: And that's probably one of the scariest things an engineering leader gets to do, because it always sounds like we're making excuses to the business, and it's awkward and potentially harmful to our career. So I think someone needs to say that. In one of my earlier jobs, I worked on a product whose original implementers did not build it with proper logical decoupling, which basically means every small change you needed to make had to be made in at least 10 different places in the code, owned by different teams. You asked me about a checkbox and, you know, coincidentally, that's exactly what I needed to build.

Eddie Flaisler: I estimated three weeks, because I was mindful of all the dependency resolution. When my boss asked me why three weeks and I started explaining, he cut me off and said: these excuses, [00:43:00] Eddie, are exactly the reason I'm not promoting you. You don't know how to speak to managers. And when I finished the checkbox in two and a half weeks, I was still reprimanded for being slow.

Eddie Flaisler: And I know for a fact that this same legacy code still runs today, with none of these issues solved and unnecessarily excessive engineering resources going into it, almost 20 years later. And I think this, in a nutshell, Morgan, is the story of tech debt. If I manage an organization, I have two choices. The first one is to maintain strong alignment with the business about all the engineering-speak issues slowing us down, and to ensure I have air cover in allocating resources to solving them and to minimizing the creation of new ones.

Eddie Flaisler: The second option is to accept that, for a variety of reasons, our stakeholders don't feel comfortable investing in anything but forward-facing functionality. That's fine, but then we have a shared understanding that when my engineer finishes a three-week checkbox in two, they actually went above and beyond. [00:44:00] There is nothing in the middle but toxicity and blame shifting.

Morgan VanDerLeest: Very much agreed. It definitely feels like the difference between your option one and option two is essentially trust, right? Do we have an organization with different arms that trust each other, or do we not? All right. Nearing the end of this episode, I want to make sure we transition to the here and now.

Morgan VanDerLeest: We've covered some great performance evaluation related strategies, but here we are. It's July. Calibration season has started and none of this prep work we discussed in this episode has happened. What do we do now?

Eddie Flaisler: I mean, if you're listening to this and you're about to enter the calibration room in an hour... all I have for you is my best wishes. If we do have a few days, or preferably weeks, to prepare, I would recommend keeping a few things in mind. The first one is, and God, I'm going to butcher this name, to rely on what James Surowiecki, apologies, James, if I butchered that, called the wisdom of crowds. James is this long-time financial journalist. He often talks about collective intelligence. And [00:45:00] he argued that the average of a large number of forecasts reliably outperforms the average individual forecast. So, in plain English: invest heavily in soliciting feedback from as many people as possible and listening to general themes. For better or worse, they probably see things you don't. And, by the way, when I say crowd, I mean crowd.
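
A minimal sketch of that averaging claim, with assumed numbers rather than anything from the episode: each rater sees the same person's true performance plus their own independent noise, and the crowd's average lands closer to the truth than a typical individual rating does.

```python
import random
import statistics

random.seed(7)
TRUE_PERFORMANCE = 72.0  # hypothetical "ground truth" on a 0-100 scale

def individual_rating() -> float:
    """One rater's noisy, idiosyncratic read of the same person."""
    return TRUE_PERFORMANCE + random.gauss(0, 12)

ratings = [individual_rating() for _ in range(25)]
crowd_error = abs(statistics.mean(ratings) - TRUE_PERFORMANCE)
typical_error = statistics.mean(abs(r - TRUE_PERFORMANCE) for r in ratings)

print(f"typical individual error: {typical_error:.1f}")
print(f"crowd-average error:      {crowd_error:.1f}")
# Caveat: averaging only helps while raters' errors stay independent.
# If everyone anchors on the loudest voice, the crowd stops being wise.
```

The caveat in the last comment is where the conversation goes next: keeping raters' errors independent is exactly why the manager and the bar raiser speak last.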

Eddie Flaisler: As people with authority, we tend to think that just because we asked for others' views, we did our job at collecting feedback. Creating that safe environment is on us. I have a random anecdote from Uber that I think is very relevant. So back at Uber, I was a bar raiser. Uber has a really nice program when it comes to hiring engineers, called the bar raiser program, similar to that of Amazon, where someone from outside the team or outside the organization independently evaluates the person.

Eddie Flaisler: And then they're also part of the debrief, part of the hiring panel, and kind of orchestrate the conversation until [00:46:00] a decision is made. And one of the things they taught us at bar raiser training is that the manager and the bar raiser always go last. Because if you go first, there is this confirmation bias, with team members who now feel obligated to say the same things, or now feel uncomfortable that they disagree or think something else.

Eddie Flaisler: The point is you need to be very mindful when soliciting feedback about creating that environment, about making sure people have that opportunity to share transparently how they feel. So that's one thing, that's about the wisdom of crowds.

Morgan VanDerLeest: You know, this makes me think of another one: the how is as important as the what. As someone's manager, you ideally have a good grasp of what someone was asked to do and why they did it. But something that's worth probably more research, though I think we have an inkling of it, is how someone's work impacted other employees. You know, there's the Netflix culture memo that was popularized, that stated, what was it? [00:47:00] Zero tolerance for brilliant jerks. But in my experience, except for extreme, borderline outrageous, or illegal cases of behavior, it's just not taken seriously enough. And that's a shame, because even if somebody excels at the what, everyone else's what suffers. And that sucks. You know, this is something where I want to call back to our episode around values, and shout out to my time at IMPACT, which I've mentioned before: they did a really great job on their job scorecards of including culture, values, and behavior in the way that they measured employees.

Morgan VanDerLeest: So it's not just your what, which is along one axis, and your ability to deliver on outcomes, but also: how did you do that thing? And if you're a jerk, or people leave a meeting or a session or come out of a project feeling like you undermined them, ruined their time with you, didn't live up to what the company expected of you,

Morgan VanDerLeest: that counts against you. And as long as you've set up values in a good way, it should. But at the very least: don't be a jerk.

Eddie Flaisler: Yeah, I could not agree more.

Morgan VanDerLeest: All right. What else [00:48:00] Eddie?

Eddie Flaisler: So the next one is difficult: understand that it is your responsibility towards your team and domain to rank. I know it's terrible, because we're dealing with people here, and they do their best, and you probably know some stuff about their personal lives, which humanizes them even further. But you need to remember that in a typical business,

Eddie Flaisler: budget is always a constraint, no matter how well the business is doing, and you have to make the decision on who to invest in further. If your view is that everyone is excellent and deserves the same amount of investment, that is not necessarily incorrect, but it is a flag in and of itself, because it raises questions about underutilization of the people, right?

Eddie Flaisler: If everyone is amazing, is the work on par? Where's the challenge? And also, is this the right organizational investment, to put all this capacity into just one team?

Morgan VanDerLeest: I also think it's never too late to implement the things that we mentioned at the beginning of this episode, particularly separating reward and punishment from the performance cycle, [00:49:00] especially punishment. If you have performance issues on your team, address them now. And if you haven't, and you've waited this far, give a little space between that and the performance calibration. You know, performance season tends to be this cause of fear and stress amongst folks. It impacts people, and obviously the work that they're working on; there's that whole extra workload during a typical evaluation season. But the season doesn't need to cause fear and stress if we go about things a little differently. Any final points on your end?

Eddie Flaisler: I can think of one more thing we kind of already alluded to, and then I have something specifically for executives to think about. So the last thing is this: earlier in this episode, we talked about regression towards the mean. The exact definition is that when one sample of a variable is extreme, the next sampling of the same variable is likely to be closer to its mean. So let's translate that to English: just because one of your engineers previously saved the company $1 million in [00:50:00] AWS costs,

Eddie Flaisler: it doesn't mean everyone else sucks, nor that this person is now underperforming and deteriorating simply because they were unable to hit that peak again. Extreme achievements are exactly that: extreme. And we should be very mindful of how we benchmark what success is, before we negatively impact a perfectly adequate software engineer simply because the highlight of their cycle was improving the responsiveness of our homepage.
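
Regression to the mean is easy to see in a simulation. Here is a minimal sketch with assumed numbers (constant skill plus luck each cycle, nothing from the episode): take each simulated engineer's best review cycle and compare it with the cycle right after it.

```python
import random
import statistics

random.seed(1)

def cycles(n: int = 8) -> list[float]:
    """Impact scores for n review cycles: constant skill plus luck."""
    skill = 100.0
    return [skill + random.gauss(0, 20) for _ in range(n)]

drops = []
for _ in range(10_000):
    history = cycles()
    peak = history.index(max(history[:-1]))  # best cycle that isn't the last
    drops.append(history[peak] - history[peak + 1])

print(f"average drop after a peak cycle: {statistics.mean(drops):.1f}")
# The drop is large and positive even though "skill" never changed:
# the peak was partly luck, so the next sample lands nearer the mean.
```

On this toy model the follow-up cycle looks like a steep decline, which is exactly the trap described here: a perfectly steady engineer gets read as deteriorating just because their best cycle set the benchmark.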

Morgan VanDerLeest: Great point. The best we can do is bring more objectivity and consistency to the way that we talk about folks, and recognize that the extremes, while great for the business or for a team, are not the norm. And as long as our general trend is, you know, raising the bar, improving, getting better, holding those extremes as the new bar is a recipe for sad people.

Eddie Flaisler: I could not agree more.

Morgan VanDerLeest: Now, how about that last word to executives?

Eddie Flaisler: Yes. So I think, to close this episode, I should probably address the elephant in the room, which I [00:51:00] know is a very big problem for a lot of engineering executives. We have to be radically transparent with the managers reporting into us regarding what happens after a performance-based termination occurs, either because you let a person go, or they resigned, having not taken well their rating and, say, a denial of a merit increase.

Eddie Flaisler: If you say you're going to backfill, then you set a budget aside for that, and you make good on your word. If you're not sure you'll be able to backfill, you share with that manager your strategy for reducing the load on the team, as well as how you will continue supporting that manager's growth despite the reduction in scope.

Eddie Flaisler: Now every time I say this, at least one person rolls their eyes and says, but Eddie, reality doesn't work like that, we can't always reduce the work, they'll just have to work harder, and the manager will just have to deal. And my typical response to that is: funny you should talk about reality because here's the reality.

Eddie Flaisler: Well, if managers know that addressing poor performance means their team will shrink, along with their own career story, [00:52:00] but the load will stay the same and they will receive no support, why would they? Why shoot yourself in the foot? The second thing is: underlying the "they'll just have to work harder" statement is the assumption that the team is not already operating at full capacity.

Eddie Flaisler: Now, there are two options. If that assumption is incorrect, we can say they'll just have to work harder all we want. Reality will end up biting us. If the assumption is correct, and you know that for a fact, then the time to act on it is not when someone leaves. The deal is this. We all know that people like stability and they're often averse to change, right?

Eddie Flaisler: We also know that in a business, change is constant and inevitable. So the fact that there will be changes is something we cannot control. But there is something we can impact. And that's our org's sense of psychological safety when such changes as a reduction in team size happen. And the thing is, you don't achieve that by simply using some corporate language.

Eddie Flaisler: You achieve that by demonstrating to [00:53:00] your team that they are in the hands of a competent, thoughtful leader. And in those hands, when the size of the team is reduced, it's because someone worked with the team to align scope and size, not because the departure of a single individual caused all the dominoes to topple.

Morgan VanDerLeest: It's such a great point. And one of the things I've learned in my career is that transparency builds trust. And when you are transparent with your team about the reasons behind your decisions, it makes it easier for them to understand and accept those decisions, even when they are difficult. To the listeners, if you enjoyed this, don't forget to share and subscribe on your podcast player of choice.

Morgan VanDerLeest: And we would love to hear your feedback. Did anything resonate with you? More importantly, did we get anything totally, completely wrong? Let us know: send your thoughts on today's conversation to peopledrivendevelopment@gmail.com. Until next time. Cheers, y'all.

Eddie Flaisler: Bye!

[00:54:00]

Creators and Guests

Eddie Flaisler (Host)
Eddie is a classically-trained computer scientist born in Romania and raised in Israel. His experience ranges from implementing security systems to scaling up simulation infrastructure for Uber’s autonomous vehicles, and his passion lies in building strong teams and fostering a healthy engineering culture.

Morgan VanDerLeest (Host)
Trying to make software engineering + leadership a better place to work. Dad. Book nerd. Pleasant human being.