How predictable can software engineering be?

PDD: Episode 5 - How predictable can software engineering be?
===

[00:00:00]

Introduction
---

Morgan VanDerLeest: All right, let's see what we have this morning. Dear PDD, I work for a company you might actually know. In January this year, I was put in charge of a team that was assembled ad hoc for the purpose of shipping a new AI-based internal product for customer support. My tech lead's original effort estimate was two months, but we're already approaching June and things keep getting pushed.

Morgan VanDerLeest: I would say we're not even halfway through. My boss is extremely dissatisfied and more than she's upset about us not delivering, she's disappointed that we can't seem to give her a date and actually stick to it. What do I do now? How do I get this thing out the door and also convince her I'll do better next time?

Morgan VanDerLeest: Signed,

Morgan VanDerLeest: White Rabbit.

Eddie Flaisler: Hm. White Rabbit as in "always late." I mean, for one, they can change careers to copywriting. But, anyhow. Cue the intro. Let's do this.

Morgan VanDerLeest: I am Morgan.

Eddie Flaisler: And I am Eddie.

Morgan VanDerLeest: Eddie was my boss.

Eddie Flaisler: Yes.

Morgan VanDerLeest: And this is PDD: People Driven Development.

Morgan VanDerLeest: Okay.

Understanding the Predictability Problem
---

Morgan VanDerLeest: So do you even think we have the full picture here? [00:01:00] It sounds like this delay can stem from a number of issues.

Eddie Flaisler: Well, you know, this is one of those questions that can have potential answers in every realm of management, right? Like it can be planning, performance management, developer productivity, whatnot. So maybe instead of reading too much into it, I think let's address the concern directly. This is a predictability problem, right?

Eddie Flaisler: And since we don't have more context, I say, let's try to run through potential causes and hope something lands for White Rabbit.

Local Issues Affecting Predictability
---

Eddie Flaisler: The way I always look at issues regarding predictability is by dividing them into local issues and global issues. And by local, I mean inherent to how the team approaches the project and the work, and global is how the system or the environment approaches the work in general, right?

Eddie Flaisler: So it's not just about the team and how they're doing the project. It's everything that happens around them. For local issues, let's start with the most obvious one, which I don't think is typically the main issue, but let's get that out of the way. [00:02:00] That's a lack of a systematic approach to making the effort estimate.

Eddie Flaisler: You know, so many of the managers I've met say that humans are notoriously bad at estimating. And that's probably true.

Eddie Flaisler: We're not very good at that. But the thing is, it doesn't mean we cannot try. We can try to break the work into smaller pieces, as small as we can think of. It's not always that easy. We can identify what we know how to do and be honest about what we don't: unknown unknowns, you know, as we say. We can agree as a team on each piece and have a factor to multiply the time estimate by, based on our level of confidence. It's not an algorithm, really. It's more like a heuristic, which is based on the team's familiarity and their dynamic. But there is a system we can put together to do that in a way that is at least defendable for us, right?
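To make the heuristic concrete, here is a minimal sketch of the kind of confidence-weighted estimate "spreadsheet" Eddie is describing. The task names, confidence factors, and focus ratio are hypothetical illustrations, not numbers from the episode:

```python
# Illustrative sketch of a confidence-weighted estimate "spreadsheet".
# Task names, confidence factors, and overhead assumptions are made up.

# (task, base estimate in ideal days, confidence in that estimate 0..1)
tasks = [
    ("ingest support tickets", 5, 0.9),   # done something similar before
    ("LLM prompt + evaluation", 8, 0.5),  # new territory, lots of unknowns
    ("agent-facing UI", 6, 0.7),
    ("logging / observability", 3, 0.8),  # easy to forget, so it's a line item
]

def padded_estimate(days: float, confidence: float) -> float:
    """Lower confidence -> bigger multiplier on the raw estimate."""
    return days * (1 + (1 - confidence))

ideal_days = sum(padded_estimate(d, c) for _, d, c in tasks)

# Calendar reality: on call, meetings, interruptions. The 0.5 focus ratio is
# an assumption standing in for Eddie's observation that putting a plan on a
# real calendar roughly doubles it.
focus_ratio = 0.5
calendar_days = ideal_days / focus_ratio

print(f"ideal-world estimate: {ideal_days:.1f} days")
print(f"calendar estimate:    {calendar_days:.1f} days")
```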

Eddie Flaisler: And things like remembering to incorporate time for testing, logging, observability. You really do need a spreadsheet. I feel like people are so often [00:03:00] pushed into giving a number. And by the way, that pushing doesn't have to be external. It can be their own need to respond or come across as confident.

Eddie Flaisler: And they say something, and that something is not well thought through, or is making some assumptions which are not shared vocally. So, that's why this heuristic is important. The second point is actually kind of related to the first one. It sounds very idyllic, what I'm proposing, this methodical way of thinking about effort estimates, but it might not actually be possible to do the first thing I said, because team members, and I've encountered that quite a lot, refuse to commit to anything or refuse to articulate their thought process around it. And then what happens is that instead of trying to understand where this reluctance comes from, and performance managing the situation if that understanding fails, the manager ends up providing an estimate in a vacuum.

Eddie Flaisler: The team was not able to give me something, so I'm gonna come up with a number and then we're gonna deal with the consequences later. That [00:04:00] happens way too often. Speaking of the manager, the manager can also have an issue with their confidence. I've seen too many cases where an engineering manager, and even a senior engineering manager, and to be very honest, I was in this situation myself when I was a younger manager, you know, when the VP comes and asks for an estimate, we don't find the confidence to articulate and justify a certain estimate to our leadership chain. So the following conversation is very typical. I ask, how long is it going to take?

Eddie Flaisler: And the person says, three weeks. And I say, okay, why three weeks? And then they say, okay, three days. But I did not ask for less. I asked to understand why three weeks. Now it sounds funny, but I've encountered it over and over again. And it really is, you know, Morgan, you and I always talk about trust. It really is starting from the assumption that when we ask why, we want to know why. It's not a passive-aggressive form [00:05:00] of saying this isn't going to work. I think we're engineers; we need to be able to break down and explain our reasoning. The next thing I can think of, which is probably one of the biggest things, if not the biggest reason for lack of predictability, is ignoring everything else besides the actual task while delivering the estimate.

Eddie Flaisler: So ask any person to estimate a project and they will envision themselves working just on that. I personally don't know how to factor in on call or out of office or the time cost of the brain settling from excessive meetings or instant messaging. And we can discuss the calculation later, but I can tell you I've done the math in the past.

Eddie Flaisler: And when you put any effort estimate on the actual team calendar, you don't need to change anything or add anything else. The time to deliver doubles. In the sense that if we're now on February 1st and we committed to 30 days, we end up in early April, just from incorporating reality. And I think the [00:06:00] last thing I can think of, speaking of dates, is dependency resolution.

Eddie Flaisler: You know, this person is saying we know the company. So let me guess: it's probably not a three-person company. You have partners that need to prioritize you. It can be the infrastructure or DevOps teams. It can be data pipelines. It can be security engineering. And even if they went through this best practice of assembling a cross-functional team who knows how to do everything, you know how it works, right? Someone in the home base of this person from the security team needs to make time to help them with X.

Eddie Flaisler: So busy wait can add up. I think that's my, you know, top list. What about yours?

Morgan VanDerLeest: The best managers and executives are the ones that know it's all relative. It depends on the teams within your org and the individuals on those teams and how they're doing in relation to one another within the team. And being able to compare teams is not necessarily a helpful thing.

Morgan VanDerLeest: So if we're looking at an individual [00:07:00] team, if I'm looking for estimates, I want the estimate to come from the engineers, the people actually doing the work. If those aren't the people giving estimates, my estimate is going to be wrong. It just is. At the same time, people need to feel like they're trusted to give their best effort estimate, but also know that it could be off for various reasons.

Morgan VanDerLeest: And this is a muscle you need to build, that estimation muscle. And it's something that suffers if you're coming down on a team for not getting things done on the timeline we expected, or if you're holding it against folks in performance management when their estimates are off. I'm sure you're going to get into this later with some of the global issues, but there are a lot of things that come into play for an estimate being predictable or not. The best thing we can do is get that estimate from the people actually doing the work. How long do they think this thing is actually going to take to do? Help them break it up into meaningful slices, things that make sense.

Morgan VanDerLeest: And be open to that shifting as we go. I would much rather have regular checkpoints where somebody's surfacing either a better [00:08:00] alternative to what we've already planned or raising the flag early on, so I know within a day of something going off the rails, versus two weeks later, when we said it was going to be done. That's when your predictability suffers.

Eddie Flaisler: I feel like what you just mentioned is probably the most foundational point. And we should have probably started from that, because like we mentioned when we discussed what an effective manager is and why we need a manager anyway, it's all about the type of relationship and the type of culture you create.

Eddie Flaisler: So, you know, we always talk about, or the industry in general talks about, managers as coaches. And an interesting question is: coach for what? For how to do documentation? Sure, here's an example. For how to do coding best practices, you have ChatGPT. So what are you teaching there?

Eddie Flaisler: And I feel like one of the main things a manager can teach is how to think of bigger problems at a digestible scale. And communicate progress and break it down and talk to different audiences and set expectations correctly. [00:09:00] And you cannot just teach that in a vacuum. You cannot just teach that in the one on one with the engineer.

Eddie Flaisler: You bring them to the table and guide them and help them as they present and represent the idea. So I think the example you gave about effort estimates is brilliant. I can have an exceptional engineer who's not amazing at talking to different audiences or explaining things to executives, but they know the system best and they also have a very good sense for how long things take for different team members. You can sit with them at the table where the conversation is being held, get the input and translate. It's not about saving face. We are all one team here. So you give them a seat at the table, you help them show up, and also they get visibility for their work and everything that they do.

Eddie Flaisler: And you give them an example for how the communication is done to the different levels, because otherwise nobody's ever gonna learn. So I'm completely with you. It's about understanding that [00:10:00] you cannot just compare teams against each other with the same scale. Every team has its own unique challenges and settings, which are probably best articulated by the people working in the team, the engineers in the trenches. So they need to be brought to the table.

Eddie Flaisler: That's a really good point.

Global Issues and Organizational Challenges
---

Eddie Flaisler: And I think, Morgan, it actually leads very naturally to the global issues I was thinking about, which is kind of symmetrical to the local ones. So. Think about it this way.

Eddie Flaisler: Have you ever worked in an engineering organization which deals with a lot of production issues, you know, to the extent that you're always juggling between building something new and putting out a fire with something you have in the field?

Eddie Flaisler: Or is it just me?

Morgan VanDerLeest: For sure, I've been plenty of places where there are production issues on a regular basis.

Eddie Flaisler: Cool. Have you ever worked in an environment where asks of the team keep coming down from above? And, you know, you need to add them to the top of the list rather than to the bottom.

Morgan VanDerLeest: Of course,

Eddie Flaisler: Good. Everyone listening can assume he's just going to say yes to everything else, because [00:11:00] that's kind of the thing.

Eddie Flaisler: Have you ever worked in an environment where direction changes literally happen every few weeks?

Morgan VanDerLeest: That's a trigger warning, but yes.

Eddie Flaisler: Exactly. Now, you tell me if one needs to be an organizational psychologist or an operations researcher to realize that whatever thread you're trying to deliver can take an unbounded time to be completed when this is happening, because priorities keep shifting. People keep shifting.

Eddie Flaisler: You never know what to expect. The order of what you need to do changes. You start things, you stop things. It's very, very difficult to maintain focus. And that's why I always say that if predictability is important to an organization, not just as a nice to have, but as a mandatory need, as in the example of commitment to strategic customers.

Eddie Flaisler: So people or teams who work with enterprise customers, for them, predictability is not just a nice to have or something to aspire to. It's their livelihood. They're either able to [00:12:00] offer the customers something by a very specific timeline, or they cannot.

Eddie Flaisler: There needs to be alignment all the way to the top that whoever is assigned to that effort here is protected from unpredictable distractions. So meaning you can factor in on call, you can factor in answering questions. I'm not saying these people should sit in a basement.

Eddie Flaisler: There's work to be done, but you won't keep throwing random ideas at them. You won't keep pulling the rug from under their feet and then be surprised that nothing gets done. And this is not an imaginary utopia. This is the only way it works.

Eddie Flaisler: You made me laugh with the trigger warning here, do you want to tell me a little bit about your experience?

Morgan VanDerLeest: Need to collect myself for a moment. My heart is still racing a bit from the directions changing so much. There's multiple layers to this, because you can build processes and things that will help with on call and answering questions and who's responsible for those things.

Morgan VanDerLeest: But priorities shifting, that's not a team's [00:13:00] problem. They don't have control over that. That's leadership. That's the group that's saying here's where we're steering the ship. And if you keep jerking the ship around, no one's going to be able to get back to their post on the ship in time to do anything about it. It's interesting because I've seen all levels of this, from being an engineer up to being in the leadership of the company, of how it feels to get thrown around like that. And to also be in a situation where there are fewer shifts. And things are more calm, and yes, there are changes and you need to be able to adjust to things, but doing so in a way that doesn't jostle the teams around so much that they can't actually do anything.

Morgan VanDerLeest: And then getting upset that they're not getting things done in a predictable manner. That's on the group running the ship, not on the folks down feeding the boilers.

Eddie Flaisler: Absolutely. I could not agree more. I think distractions can be external and can be internal. A very typical type of distraction that is inherent to a project is unreasonably frequent change of requirements. Like you mentioned, that is on the [00:14:00] leadership team, whether executive or just the engineering and product leadership that works directly with the engineers, and it mandates rework and it wears out the team.

Eddie Flaisler: And, you know, because research wasn't properly done to begin with, now we're in a situation where everyone is running in all directions, and even if they're staying in the same direction, they feel like they need to start from scratch, or like they haven't looked at the problem properly to begin with, which sometimes is inevitable, but not always.

Morgan VanDerLeest: It's interesting that you call it out because you can put a ton of time into planning, but we can't plan for everything, can we? No matter how hard a product owner or whatnot tries, there's always going to be things that we don't know.

Eddie Flaisler: So that's actually a very good point, Morgan, because I've personally heard many engineering leaders complain that when they bring up a concern regarding frequently changing requirements, it comes across as a little bit of blame shifting. You know what I mean? I don't think anyone can be expected to be a Mozart and flesh out a full symphony with the first stroke of their pen.

Eddie Flaisler: That's not the idea. It's about the [00:15:00] increments you decide to work in. You know, in one of my previous jobs, I've run an organization where several teams were in the critical path of everything that was developed across engineering. Literally every other week, there was a fire drill. Someone needed something right now.

Eddie Flaisler: And when I say right now, I mean, right now. Our planning cycle in that company, which is very typical for enterprise companies, was quarterly. So you used to say what you're going to do in quarter X. And, you know, I found myself repeatedly having to choose between us either being late to deliver practically anything and everything we committed to, or protecting my team's time to an extent that would make me seem uncollaborative to my peers.

Eddie Flaisler: Do you want to do this? No. Can you do this? No. We need this. No. So there are all these books about saying no, but ultimately what people don't like to talk about is that when you say no, nobody likes you. Nobody likes a no, right? Even in your personal life, boundaries are hard. So I had no choice.

Eddie Flaisler: I looked at how often we ended up changing direction, [00:16:00] in the sense of not just the direction of what we're actually working on, but just having to work on something else. And I decided to shorten the cycles of what we were committing to down to six weeks. So people got to finish whatever they started with a higher probability.

Eddie Flaisler: Now, why is this relevant to the question? Because with the shorter scope, something magical happened as a side effect. Requirements quality improved. And if you think about it, it's not very surprising, because it's easier to go deep on a smaller set of requirements.

Strategies for Improving Predictability
---

Eddie Flaisler: So working in increments sets you up for success.

Eddie Flaisler: Over the years, there have been a lot of arguments for and against agile methodologies. And I don't remember if I said that on one of our previous episodes, but I actually think agile is great if implemented correctly.

Eddie Flaisler: Agile, it's not just about the ceremonies. But it's also not just about working or iterating until it's ready, because life doesn't work like that. We have [00:17:00] requirements and timelines, and sometimes you don't just develop and iterate. You are told what the revision is going to be, especially if you're working with enterprise customers.

Eddie Flaisler: But one of the Agile principles I do believe in, which in my mind is not followed strictly enough, is the idea of small increments. Only do something you have a good sense of: what it's going to look like at the end, what the possible challenges are, what the edge cases are. This understanding inherently develops over time as we build something, which is the reason we don't start with version five; we start with version zero. And I feel like everyone tries to do that, but not really. Something to think about.

Morgan VanDerLeest: You know, I love that you bring up the smaller increments because something that I've come to realize over the years is the value of adaptability over efficiency. There's a certain level of software development that just involves [00:18:00] uncertainty. Types of work may be similar, but at the end of the day, this is a profession that is not like manufacturing engineering.

Morgan VanDerLeest: Those are workplaces where you have a number of machines, they do a particular thing and you can crank out as much volume as you want, trying to get the lowest possible error rate and you keep optimizing to get there. But that is not how people work. There isn't an exact same machine.

Morgan VanDerLeest: There's no exact same output. And the few times that those inputs or outputs may be very similar, most engineers are going to get bored. The whole profession in my eyes is still more art than manufacturing. So how do we create processes in an environment that supports flow, creativity, peak brain function? Do those things well, and I think you're on your way to getting to more predictable outcomes. And now what do I mean by that? So instead of saying, here's the project that we planned out, it's going to take us four weeks to do, and here's all the things that we're going to get done in that time, chances are there's going to be unknowns within that.

Morgan VanDerLeest: Somebody may need to take time off within that. Things are going to come up. There may be a change of priority within the company. All these [00:19:00] pieces can happen. How do you shrink that thing down into those meaningful and deliverable slices and have the team be sharing updates as this is happening?

Morgan VanDerLeest: What are they discovering as we go? Are there different routes we can take? Are there things coming up that we need to be able to handle and adjust to? So instead of, at the end of those 30 days, saying, oh, whoops, we're two weeks behind, at the end of week one we can say, hey, this is going to take us an extra 10 to 14 days. That's a way bigger difference than finding out at the very end. And so being able to adjust and communicate and work well within that is much better than just saying, here's our four weeks, here's exactly what's going to happen within those four weeks, it's going to work this time, I promise. Never happens that way.

Eddie Flaisler: I think you're so right. I love that. And I think that conversation can be introduced as a team practice, but underlying it is the concept of psychological safety, right? We've previously mentioned the manager who's scared to tell their own leadership team that it's not going to be ready in two [00:20:00] weeks, it's gonna be ready in four weeks. So they just say it's gonna be ready in two and deal with the consequences later. It's not necessarily because the manager is unprofessional or not well intentioned. Let's be honest, many environments are not conducive to the type of psychological safety that allows you to articulate your concerns and your true thoughts about the real complexity underlying a certain piece of work. I think we quoted Deming at least once on this podcast: when there's fear, you get bad numbers. So the idea is, you can't control how every single person, how every single manager, is feeling or is dealing with difficult conversations, but you can make sure that as an executive or just a more senior leader, you run an environment where people have the space to talk about complexities and things taking longer or being more difficult than initially [00:21:00] assumed.

Morgan VanDerLeest: You know, Eddie, that brings up an important point that I like to try to impress wherever I am, which is the idea of acknowledging reality. There's all the stories in the industry about the reality distortion field of Jobs, or all these things where, ah, if we just will it into existence, we can do it.

Morgan VanDerLeest: There may be some exceptions where that works, but in reality, we need to be able to say, this is what we're working with. This is what we're able to do. And this is the likelihood of us actually being able to get to that end game. And being able to start working with that. Even just being able to say as a group, we do expect our priorities to shift some, because we're still working on that.

Morgan VanDerLeest: We're trying to either catch up with the market, or our customers are very demanding. Being able to say, yes, this is our reality, these are the external pressures that we can expect within this organization, and then say: how do we build our processes and structure and understanding, and give the teams the context that these are the things that are not going to change?

Morgan VanDerLeest: How can we operate well within this? I think that makes a very big [00:22:00] difference to how teams are able to work well, how individuals are able to work well, knowing that they have some kind of scope boundaries, things that, even if they are external pressures, are understood and clear, and the why behind them.

Morgan VanDerLeest: And then it's, what is it? Constraints breed creativity or whatever, where by having these things, we can say, cool, these are the things that we cannot shift. Let's figure out how to do the best that we can knowing that this is our reality.

Eddie Flaisler: I love that.

Cycle Time and Metrics
---

Morgan VanDerLeest: You know, one of those things that always comes up when you talk predictability is predictability metrics. We want to cover that?

Eddie Flaisler: Yeah, I think we should because, you know, it's not just about the numbers, but we discussed today a lot about the human factor, and I think these two concepts are very closely related. So we work in an era where there are so many tools and platforms to measure productivity and flow. And for the purpose of increasing predictability, I think we can focus on just one metric: cycle time or cycle time variance, but we'll get to that.

Eddie Flaisler: Everyone's talking about cycle time. The idea is [00:23:00] to measure how long it takes from the moment you start executing on a work item to when it hits production, and that symbolizes closing the cycle.

Eddie Flaisler: Now, at a surface level, it sounds like this focuses more on velocity and not as much on predictability. But the thing that's really interesting about cycle time isn't the number itself. It's the contributors to the number. Say the time it takes to review code or to run a build as well as the variance of cycle time, because that's often where the bone is buried.
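For illustration, here is a minimal sketch of the cycle time and variance calculation being described. The work items and dates are made up; in practice this data would come from your ticketing or delivery tooling:

```python
# Illustrative cycle time / variance calculation. Items and dates are made up;
# real data would come from your ticketing or delivery tooling.
from datetime import date
from statistics import mean, stdev

# (work item, started, shipped to production)
items = [
    ("ticket-101", date(2024, 2, 1),  date(2024, 2, 5)),
    ("ticket-102", date(2024, 2, 3),  date(2024, 2, 6)),
    ("ticket-103", date(2024, 2, 5),  date(2024, 3, 8)),   # stuck on a dependency
    ("ticket-104", date(2024, 2, 10), date(2024, 2, 13)),
]

cycle_times = [(done - start).days for _, start, done in items]

print(f"cycle times (days): {cycle_times}")
print(f"average: {mean(cycle_times):.1f}, std dev: {stdev(cycle_times):.1f}")
# A spread that is large relative to the average is the signal being pointed
# at here: the mean alone won't tell you whether the next item takes 3 days
# or 30.
```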

Eddie Flaisler: You know, I recently saw a post by a former engineer in my team who asked, let me find it here: do you have a positive experience to share with rolling out DORA metrics in a way that felt psychologically safe to the engineers being measured? And that post really was an eye opener for me, because I always sensed, when I was working with my teams on DORA metrics, that there was some resistance. But I'm kind of embarrassed to [00:24:00] say I did not immediately realize that the metrics themselves and the measurements were making people uncomfortable. And I'll just read to you what I responded to the engineer on the post.

Eddie Flaisler: So reflecting on my experience, I don't think it's about the rollout process itself. There needs to be underlying trust between the engineers and their management chain regarding the fact that learnings from this type of measurement result in systemic improvements to boost productivity as opposed to individual performance management.

Eddie Flaisler: This trust isn't created through words, but through what the leadership team does and does not do. The does is driving investment allocation in areas that will improve development productivity. The does not is using these and other related numbers as a context-less justification for rating someone as a poor performer. And my biggest takeaway as a manager is that if I'm not convinced we as a leadership team are actually interested in making these investments and staying mindful of misuse,

Eddie Flaisler: I'm not [00:25:00] going to implement them. I just don't think these metrics will be helpful. I do want to talk about the mechanics of cycle time and cycle time variance a little bit, but do you have some comments on that?

Morgan VanDerLeest: I really appreciate your response to that post there. This is the thing that it always comes back to with data for me: if you tie data to a specific thing and ignore everything else around it, you're using data wrong. It's more about, what does this tell us?

Morgan VanDerLeest: Relative to the other things that we know, or think that we know. And how do we make adjustments accordingly? So cycle time is a great one, or cycle time variance, where it's less about this one person's cycle time and more about how we improve our cycle time as a team.

Morgan VanDerLeest: And as an organization, what are the inputs to that that can make a difference? I've done some things in the past like, for this month, we're going to focus on our PR review time. And every morning, or whatever your cycle is, everyone puts some time on their calendar, whatever time of day is best.

Morgan VanDerLeest: You're going to go in, you're going to review PRs at a certain time. And hopefully that kind of spreads it out enough that we're [00:26:00] getting PRs reviewed regularly. That's the kind of developer productivity thing that we focus on for a particular month. And then we shift to something else. And hopefully enough of that is built into a habit that it ends up improving that time. And we do similar things like that over a certain period.

Morgan VanDerLeest: And that's not calling out an individual. That's not saying, hey, you're not working fast enough, but, hey, as a group, how can we contribute to this thing?

Eddie Flaisler: I think this lands really well, and it's perfectly aligned with what I'm trying to say because my point is that these metrics are mainly useful not in deeming a team as good or bad, but in identifying the bottlenecks holding the team back for unbounded periods of time, hence the variance, right?

Eddie Flaisler: And it makes it difficult to commit to anything, because you never know what you can learn from the number or what you can't. Because if you look at the graph and you see that the different data points that lead to a certain average differ significantly, you can't really tell, next time you try to rely on this number, [00:27:00] whether you're going to see more of the high points.

Eddie Flaisler: Or more of the five minute points, right? So the overall average isn't very helpful. The variance, on the other hand, can help with questions like, you know, are our resources being effectively allocated? Speaking of protecting time. Maybe you're finding out that at this point, where everyone finished really quickly, we let them do their job.

Eddie Flaisler: At another point, it was simply because there were so many things that came in that the context switching ate up their time. Do we have a quality and observability problem where fixing an issue can take anywhere between five minutes and five weeks? I think probably every single engineering manager has a story, or even every engineer, about this really, really, really ridiculously simple issue which ended up taking weeks, if not months, to identify. And it ended up being a change of one line of code. Is this always preventable? No, but I think we can all agree that there is enough work today that can be [00:28:00] done, whether it is logging or some forms of observability or output in a certain way or certain debugging modes, that can help you get to these answers faster when it happens in production.

Eddie Flaisler: And of course, another thing, cycle time variance can help us see is things like, are we responding to a customer request or to an escalation in a way that gets in the team's way?

Eddie Flaisler: Do we have just this first come, first served methodology? Or do we look at priorities differently? You know, if someone yells louder, do we approach that? Do we have a consolidated model of responding, as opposed to just being on Slack all the time and refreshing? One of the things we did at Lob, which I loved, was when the teams started using a portal through Jira to accept tickets for dependencies, as opposed to just constantly refreshing Slack to see if anyone needs anything.

Eddie Flaisler: And then obviously context switching goes to hell and you can't actually focus. The last thing I'll probably mention, which is indirectly related to cycle time [00:29:00] variance, is work in progress. I think one thing which has so much research around it, in operations research and in lean development and lean manufacturing, is the notion of work in progress as a factor in cycle time and cycle time variance. So if you think of Little's Law, which is so popular to bring up in conversations about lean development, I don't think for the most part we actually implement the learnings from that law. What it basically says is, the more work in progress, the longer it takes to finish each work stream. Which seems kind of intuitive, but not really, because people sometimes think, okay, I have X resources, I have Y projects.

Eddie Flaisler: I'm going to divide X by Y. Cool, that's it. But it doesn't actually work like that. And so much research shows that the more focus areas you have, the longer it will take you to finish each thing. So sometimes it's good to actually do less. All [00:30:00] these learnings do happen when you observe the data. When you just use the data to decide, oh, wow, this person has a really high cycle time, they're doing a bad job, that's counterproductive.
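A rough sketch of the Little's Law relationship being referenced here, with hypothetical WIP and throughput numbers purely for illustration:

```python
# Little's Law: average cycle time = average work in progress / throughput.
# The numbers below are hypothetical, just to show the shape of the tradeoff.

def avg_cycle_time(wip: float, throughput_per_week: float) -> float:
    """Average time an item spends in the system, in weeks."""
    return wip / throughput_per_week

throughput = 2.0  # items the team finishes per week, assumed constant

for wip in (8, 4, 2):
    print(f"WIP={wip}: ~{avg_cycle_time(wip, throughput):.1f} weeks per item")

# With the same team and the same throughput, halving work in progress
# halves the average time each individual work stream takes to finish.
```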

Morgan VanDerLeest: You know, Eddie, I love that you bring that up. This goes back to that adaptability over efficiency. Because if you think about it, say we have a team of eight people. That means we can have eight projects going on at one time. This is going to work great. That sounds efficient.

Morgan VanDerLeest: Never the case. Now, there's a balance there, because if you say we're going to work on one project at a time with eight people, it's probably also not going to go that great. So for your individual team and within your organization, each team may have a different balance or a different ratio there.

Morgan VanDerLeest: What's a rough ratio of projects that the team can handle at a time, taking into consideration they have on call, or they're working on bug and support tickets, or they have somebody who's responding to customer requests? So in that sense, you're bringing your number of projects down, because that's technically this other work stream that's happening within your org. So being able to think about things in that way. Yeah, I love Little's Law, WIP, theory of constraints, thinking through how we make sure that, going back [00:31:00] to the global and local bits, on a global scale, how do we get more globally done?

Morgan VanDerLeest: That's not by optimizing for the local. That's not by squeezing as much out of one person. That's by finding that balance for how do we get teams to be able to operate in the best way to get the most done in a predictable manner and out the door.

Eddie Flaisler: Absolutely. And I think sometimes it's kind of what you mentioned with eight people, eight projects. You know, one rule of thumb I used to have in many of the teams I managed is that I put at least two people on one work stream.

Eddie Flaisler: And oftentimes, at least on paper, it wasn't the most efficient thing, but actually it really is. Because first of all, two people can give better estimates. They can balance each other. They can support each other either emotionally or intellectually or cover for each other if one of them is not available. They can learn from each other.

Eddie Flaisler: There are so many ways in which working not in a silo benefits you and benefits the product and the team. It's even better for [00:32:00] predictability. Think of it from a probability standpoint, the more chances you have someone finding the courage to raise a flag, the more data points you'll have to see when you track through Jira or whatever that something is not moving, the more conversations people will need to have with each other, which prevents the situation where a person gets stuck on an issue and just sits in front of their screen indefinitely for a few days and doesn't let anyone know.

Eddie Flaisler: I'm totally with you. Putting people first is actually putting the business first. Which is exactly what we've been trying to say this entire time.

Morgan VanDerLeest: The other funny thing about putting two people on a project is there's this assumption that, oh, once we finish the project, we're done with it. That's not how software works. You're going to have to come back to this thing. So you want a better version of that project done the first time.

Morgan VanDerLeest: So you have less to come back to later and it has fewer issues, because, hey, having two sets of eyes and two brains work on something tends to get to a better result. It's hard to understand, this seems less efficient, I have two people on this, [00:33:00] but you're adding that predictability later on, because even if things do come up with this piece of software later on, which they will, there's going to be less of it. These two people have thought about this. A good reason to have diverse teams also is that you have multiple viewpoints thinking about: how could this thing go wrong?

Morgan VanDerLeest: What are the edge cases? What are the issues that one individual couldn't have thought of on their own, even if they're brilliant? There are always things. How do we catch more of that up front? You have more than one brain on it.

Eddie Flaisler: One hundred percent.

Final Advice for Listener
---

Morgan VanDerLeest: So we've been a little strategic here. Let's get back to the tactical. What can White Rabbit do to ship this thing?

Eddie Flaisler: You know, at this point, I don't think there's a way around managing expectations. Which basically means this. Number one, they probably need to reduce scope. So sit at the table and say, out of everything I have left to do, what is the subset that we have the most confidence about? So you reduce scope, and then you do the due diligence we discussed for that smaller scope.

Eddie Flaisler: That means the technical challenges, the unknowns, the [00:34:00] requirements you take into consideration, typical interruptions. You get that data and you build up the courage and you talk to your leadership team about what you can do for sure by when, and that for sure can be a very small subset of the original thing.

Eddie Flaisler: But the for sure part is more important than the size. So you come prepared, both intellectually and emotionally, to articulate the challenges in predicting everything else at this point. You do commit to what you can do, and as the team progresses through the current iteration, you can work with your product partner to mature the next iteration and communicate with your leadership team when you're ready.

Eddie Flaisler: So, I really don't think there's anything else I would expect as a leader from a manager reporting to me who's currently struggling with this than to come and say: this is the increment I can give predictability on. I understand it's not easy to hear, and I'm sorry I did not communicate that earlier, but at this point we have a lot of unknowns, which we were not able to [00:35:00] articulate previously, even to ourselves, as unknowns.

Eddie Flaisler: Here's what we know for sure. Here's how long it's gonna take. Here's what it is contingent on, so what dependencies we have in place and what assumptions we're making. And as the team chews through this iteration, I'm gonna work with my product partner on the next one, and I'm gonna communicate a date to you by X.

Morgan VanDerLeest: Love that. And I want to restate: that takes courage. It is hard and it is uncomfortable to go to leadership and do that. But that is what needs to happen if you want to start regaining trust. The thing that I would recommend they do, and actually this calls back to a conversation you mentioned having earlier: it'll take two weeks, why, okay, it'll take three days. From a manager perspective, when you are talking to your reports and your team about estimates, dig into that why. Ask questions and help them be able to support why they need the time that they do. You're not looking for them to shorten the time. It's great if that can happen, sure.

Morgan VanDerLeest: But that's not the point. The point is: I need two weeks. [00:36:00] Why? Well, first, I need to dig into and plan out the XYZ pieces of it. I need the time to actually do the work. I need the time to write the tests and the documentation, and then I need two days for QA at the end, just to make sure that I've polished everything off nicely.

Morgan VanDerLeest: So we're right around two weeks. Cool. I love that. That helps me be able to communicate better to whoever I need to communicate out to. And you're helping to build that muscle within the individual, which will benefit them in their career. That'll benefit your team in being able to properly talk about these things.

Morgan VanDerLeest: And that'll hopefully set them up to be able to one day grow into a phenomenal senior, staff, or manager, whichever direction they decide to go, and be able to say: I know why I need this time, and I'm going to help you do this now.

Eddie Flaisler: I could not have phrased it better.

Outro
---

Morgan VanDerLeest: All right, y'all, if you enjoyed this. Don't forget to share and subscribe on your podcast player of choice. What tough challenge are you facing as an engineering leader? We'd love to hear from you at: peopledrivendevelopment@gmail.com. Until next time, see y'all.

Eddie Flaisler: Cheers.

Creators and Guests
---

Eddie Flaisler
Host
Eddie is a classically-trained computer scientist born in Romania and raised in Israel. His experience ranges from implementing security systems to scaling up simulation infrastructure for Uber’s autonomous vehicles, and his passion lies in building strong teams and fostering a healthy engineering culture.

Morgan VanDerLeest
Host
Trying to make software engineering + leadership a better place to work. Dad. Book nerd. Pleasant human being.