In this episode of the "Giant Robots Smashing Into Other Giant Robots" podcast, host Victoria Guido delves into the intersection of technology, product development, and personal passions with her guests Henry Yin, Co-Founder and CTO of Merico, and Maxim Wheatley, the company's first employee and Community Leader. They are joined by Joe Ferris, CTO of thoughtbot, as a special guest co-host. The conversation begins with a casual exchange about rock climbing, revealing that both Henry and Victoria share this hobby, which provides a unique perspective on their professional roles in software development.
Throughout the podcast, Henry and Maxim discuss the journey and evolution of Merico, a company specializing in data-driven tools for developers. They explore the early stages of Merico, highlighting the challenges and surprises encountered while seeking product-market fit and the strategic pivot from focusing on open-source funding allocation to developing a comprehensive engineering metrics platform. This shift in focus led to the creation of Apache DevLake, an open-source project created by Merico and later donated to the Apache Software Foundation, reflecting the company's commitment to transparency and community-driven development.
The episode also touches on future challenges and opportunities in the field of software engineering, particularly the integration of AI and machine learning tools in the development process. Henry and Maxim emphasize the potential of AI to enhance developer productivity and the importance of data-driven insights in improving team collaboration and software delivery performance. Joe contributes to the discussion with his own experiences and perspectives, particularly on the importance of process over individual metrics in team management.
- Merico
- Follow Merico on GitHub, LinkedIn, or X.
- Apache DevLake
- Follow Henry Yin on LinkedIn.
- Follow Maxim Wheatley on LinkedIn or X.
- Follow thoughtbot on X or LinkedIn.
Become a Sponsor of Giant Robots!
Transcript:
VICTORIA: This is the Giant Robots Smashing Into Other Giant Robots podcast, where we explore the design, development, and business of great products. I'm your host, Victoria Guido. And with me today is Henry Yin, Co-Founder and CTO of Merico, and Maxim Wheatley, the first employee and Community Leader of Merico, creating data-driven developer tools for forward-thinking devs. Thank you for joining us.
HENRY: Thanks for having us.
MAXIM: Glad to be here, Victoria. Thank you.
VICTORIA: And we also have a special guest co-host today, the CTO of thoughtbot, Joe Ferris.
JOE: Hello.
VICTORIA: Okay. All right. So, I met Henry and Maxim at the 7CTOs Conference in San Diego back in November. And I understand that Henry, you are also an avid rock climber.
HENRY: Yes. I know you were also in Vegas during Thanksgiving. And I sort of have [inaudible 00:49] of a tradition to go to Vegas every Thanksgiving to Red Rock National Park. Yeah, I'd love to hear how your trip to Vegas went this Thanksgiving.
VICTORIA: Yes. I got to go to Vegas as well. We had a bit of rain, actually. So, we try not to climb on sandstone after the rain and ended up doing some sport climbing on limestone around the Blue Diamond Valley area; a little bit light on climbing for me, actually, but still beautiful out there. I loved being in Red Rock Canyon outside of Las Vegas.
And I do find that there's just a lot of developers and engineers who have an affinity for climbing. I'm not sure what exactly that connection is. But I know, Joe, you also have a little bit of climbing and mountaineering experience, right?
JOE: Yeah. I used to climb a good deal. I actually went climbing for the first time in, like, three years this past weekend, and it was truly pathetic. But you have to [laughs] start somewhere.
VICTORIA: That's right. And, Henry, how long have you been climbing for?
HENRY: For about five years. I like to spend my time in nature when I'm not working: hiking, climbing, skiing, scuba diving, all of the good outdoor activities.
VICTORIA: That's great. And I understand you were bouldering in Vegas, right? Did you go to Kraft Boulders?
HENRY: Yeah, we went to Kraft and also Red Spring. It was a surprise for me. I was able to upgrade my outdoor bouldering grade to V7 this year at Red Spring and Monkey Wrench. There are always some surprises for me. When I went to Red Rock National Park last year, I met Alex Honnold there, who was shooting a documentary, and he was really, really friendly. So, I really enjoy every Thanksgiving trip to Vegas.
VICTORIA: That's awesome. Yeah, well, congratulations on V7. That's great. It's always good to get a new grade. And I'm kind of in the same boat as Joe, where I'm just constantly restarting my climbing career. So [laughs], I haven't had a chance to push a grade like that in a little while. But that sounds like a lot of fun.
HENRY: Yeah, it's really hard to be consistent with climbing when you have, like, a full-time job, and there's so much going on in life. It's always a challenge.
VICTORIA: Yeah. But a great way to like, connect with other people, and make friends, and spend time outdoors. So, I still really appreciate it, even if I'm not maybe progressing as much as I could be. That's wonderful. So, tell me, how did you and Maxim actually meet? Did you meet through climbing or the outdoors?
MAXIM: We actually met through AngelList, which I really recommend to anyone who's really looking to get into startups. When Henry and I met, Merico was essentially just starting. I had this eagerness to explore something really early stage where I'd get to do all of the interesting kind of cross-functional things that come with that territory, touching on product and marketing, on fundraising, kind of being a bit of everything. And I was eager to look into something that was applying, you know, machine learning, data analytics in some really practical way.
And I came across what Hezheng Henry and the team were doing in terms of just extracting useful insights from codebases. And we ended up connecting really well. And I think the previous experience I had was a good fit for the team, and the rest was history. And we've had a great time building together for the last five years.
VICTORIA: Yeah. And tell me a little bit more about your background and what you've been bringing to the Merico team.
MAXIM: I think, like a lot of people in startups, I consider myself a member of the Island of Misfit Toys, in the sense that there's no kind of clear-cut linear pathway through my journey, but a really exciting and productive one nonetheless. So, I began studying neuroscience at Georgetown University in Washington, D.C. I was about to go to medical school and, in my high school years, had explored entrepreneurship in a really basic way. I think, like many people do, finding ways to monetize my hobbies and really kind of getting infected with that bug that I could create something, make money from it, and kind of be the master of my own destiny, for lack of less cliché terms.
So, not long after graduating, I started my first job at a seed-stage venture capital firm, and from there, I had the opportunity to help early-stage startups and invest in them. I was managing a startup accelerator out there. From there, I produced a documentary that followed those startups. Not long after all of that, I ended up co-founding a consumer electronics company where I was leading product, so doing lots of mechanical, electrical, and a bit of software engineering.
And without taking too long, those were certainly kind of two of the more formative things. But one way or another, I've spent my whole career now in startups, especially early-stage ones. Something I was eager to do was to take some of the high-level abstract science that I had learned in my undergraduate and apply some of those frameworks to some of the things that I do today.
VICTORIA: That's super interesting. And now I'm curious about you, Henry, and your background. And what led you to get the idea for Merico?
HENRY: Yeah. My professional career is actually much simpler because Merico was my first company and my first job. Before Merico, I was a PhD student at UC Berkeley studying computer science. My research was at the intersection of software engineering and machine learning. And back then, we were tackling this research problem of how do we fairly measure developer contributions in a software project?
And the reason we were interested in this problem has to do with the open-source funding problem. So, let's say an open-source project gets a 100k donation from Google. How can the maintainers automatically distribute all of the donations to sometimes hundreds or thousands of contributors according to their varying levels of contribution? So, that was the problem we were interested in. We did research on this for about a year. We published a paper. And later on, you know, we started the company with my, you know, co-authors. And that's how the story began for Merico.
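[Editor's note: to make the funding-allocation idea concrete, here's a toy sketch of the proportional payout step, with made-up contributor scores. Merico's published research is about how to compute fair contribution scores in the first place; this only illustrates distributing a donation once you have them.]

```python
# Toy sketch: split a donation across contributors in proportion to their
# contribution scores. The scores themselves are the hard research problem;
# here they're just hypothetical inputs.
def allocate_donation(donation: float, scores: dict[str, float]) -> dict[str, float]:
    total = sum(scores.values())
    if total == 0:
        raise ValueError("no measurable contributions to allocate against")
    return {dev: donation * score / total for dev, score in scores.items()}

# Example: a 100k donation split across three contributors.
print(allocate_donation(100_000, {"alice": 60.0, "bob": 30.0, "carol": 10.0}))
# {'alice': 60000.0, 'bob': 30000.0, 'carol': 10000.0}
```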
VICTORIA: I really love that. And maybe you could tell me just a little bit more about what Merico is and why a company may be interested in trying out your services.
HENRY: The product we're currently offering is actually a little bit different from what we set out to build. At the very beginning, we were building this platform for the open-source funding problem where, given an open-source project, we could automatically, using an algorithm, measure developer contributions and automatically distribute donations to all developers. But then we encountered some technical and business challenges.
So, we took the metrics component out of the previous idea and launched this new product in the engineering metrics space. And this time, we focus on helping engineering leaders better understand the health of their engineering work. So, this is the Merico analytics platform that we're currently offering to software engineering teams.
JOE: It's interesting. I've seen some products that try to judge the health of a codebase, but it sounds like this is more trying to judge the health of the team.
MAXIM: Yeah, I think that's generally fair to say. As we've evolved, we've certainly liked to describe ourselves this way: you know, I think a lot of people are familiar with observability tools, which ultimately help ascertain, like, the performance of the technology, right? Like, assessing, visualizing, chopping up the machine-generated data. And we thought there would be a tremendous amount of value in being, essentially, observability for the human-generated data.
And I think, ultimately, what we found on our journey is that there's a tremendous amount of frustration, especially in larger teams. People aren't looking to use a tool like that for any kind of, like, policing type thing, right? Like, no one, at least if they're doing it right, is looking to figure out, like, oh, who's underperforming, or who do we need to yell at? They're really trying to figure out, like, where are the strengths? Like, how can we improve our processes? How can we make sure we're delivering better software more reliably, more sustainably? Like, how are we balancing that trade-off between new features and upgrades, and managing tech debt and bugs?
We've ultimately just worked tirelessly to, hopefully, fill in those blind spots for people. And so far, I'm pleased to say that the reception has been really positive. We've, I think, tapped into a somewhat subtle but nonetheless really important pain point for a lot of teams around the world.
VICTORIA: Yeah. And, Henry, you said that you started it based on some of the research that you did at UC Berkeley. I also understand you leaned on the DevOps research from DORA. Can you tell me a little bit more about that and what you found insightful from the research that was already out there?
MAXIM: So, I think what's really funny, and it really speaks to the importance in product development of just getting out there and speaking with your potential users or actual users, is that despite all of the deep, deep research we had done on the topic of understanding engineering, we really hadn't touched on DORA much. And this is probably going back about five years now.
Henry and I were taking a customer meeting with an engineering leader at Yahoo out in the Bay Area. He kind of revealed this to us basically where he's like, "Oh, you guys should really look at incorporating DORA into this thing. Like, all of the metrics, all of the analytics you're building super cool, super interesting, but DORA really has this great framework, and you guys should look into it."
And in hindsight, I think we can now [chuckles], honestly, admit to ourselves, even if it maybe was a bit embarrassing at the time, that both Henry and I were like, "What? What is that? Like, what's DORA?" And we ended up looking into it and, since then, have really become evangelists for the framework. And I'll pass it to Henry to talk about, like, what that journey has looked like.
HENRY: Thanks, Maxim. I think what's cool about DORA is, in terms of using metrics, there's always this challenge called Goodhart's Law, right? So, whenever a metric becomes a target, the metric ceases to be a good metric because people are going to find ways to game it. So, I think what's cool about DORA is that it actually offers not just one metric but four key metrics that bring balance, covering both stability and velocity.
So, when you look at DORA metrics, you can't just optimize for velocity and sacrifice your stability. You have to look at all four metrics at the same time, and that's harder to game. So, I think that's why it's become more and more popular in the industry as the starting point for using metrics for data-driven engineering.
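[Editor's note: the four DORA metrics Henry refers to are deployment frequency, lead time for changes, change failure rate, and time to restore service. Here's a minimal sketch of the balancing idea he describes; the names and units are hypothetical illustrations, not DevLake's API.]

```python
from dataclasses import dataclass

@dataclass
class DoraSnapshot:
    deployment_frequency: float   # deploys per week (velocity)
    lead_time_hours: float        # commit-to-production lead time (velocity)
    change_failure_rate: float    # fraction of deploys causing failures (stability)
    time_to_restore_hours: float  # time to restore service after a failure (stability)

def velocity_bought_with_stability(before: DoraSnapshot, after: DoraSnapshot) -> bool:
    """Flag the Goodhart failure mode: velocity 'improved' while stability degraded."""
    faster = (after.deployment_frequency > before.deployment_frequency
              or after.lead_time_hours < before.lead_time_hours)
    less_stable = (after.change_failure_rate > before.change_failure_rate
                   or after.time_to_restore_hours > before.time_to_restore_hours)
    return faster and less_stable
```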
VICTORIA: Yeah. And I like how DORA also represents it as the metrics and how they apply to where you are in the lifecycle of your product. So, I'm curious: with Merico, what kind of insights do you think engineering leaders can gain from having this data that will unlock some of their team's potential?
MAXIM: So, I think one of the most foundational things, before we get into any detailed metrics, is more important than ever, especially given that so many of us are remote: the processes of software engineering are generally difficult to understand, right? They're nuanced. They tend to kind of happen in relative isolation until a PR is reviewed and merged. And it can be challenging, of course, to understand what's being done, how consistently, how well, like, where are the good parts, where are the bad parts. And I think that problem gets really exacerbated in a remote setting where no one is necessarily in the same place.
So, on a foundational level, I think we've really worked hard to solve that challenge of just being able to see, like, how are we doing? And to that point, what we've found is that before anyone even dives too deep into all of the insights we can deliver, there's a tremendous amount of appetite from anyone looking to get into that practice of constant improvement and figuring out how to level up the work they're doing: just setting clear benchmarks, figuring out, like, okay, when we talk about more nebulous or maybe subjective terms like speed or quality, what does good look like? What does consistent look like?
Being able to just tie those things to something shared really kind of unifies the vocabulary, as I always like to say, where, okay, now, even if we're not focused on a specific metric, or we don't have a particular goal in mind that we want to assess, we're at least starting the conversation as a team from a place where, when we talk about quality, we have something that's shared between us. We understand what we're referring to. And when we're talking about speed, we also have something consistent to talk about there.
And within all of that, I think one of the most powerful things is it helps to really ground the conversations around the trade-offs, right? There's always that common saying about the triangle of trade-offs: you can have it cheap, you can have it fast, or you can have it good, but you can only have two. And I think with DORA, with all of these different frameworks and their many metrics, it helps to really solidify what those trade-offs look like. And, for me at least, one of the most impactful things has been watching our global users really start evolving their practices with it.
HENRY: Yeah. And I want to add to Maxim's answer. But before that, I just want to quickly mention how our products are structured. So, Merico actually has an open-source component and a proprietary component. The open-source component is called Apache DevLake. It's an open-source project we created first within Merico and later on donated to the Apache Software Foundation. And now, it's one of the most popular engineering metrics tools out there.
And then, on top of that, we built a SaaS offering called DevInsight Cloud, which is powered by Apache DevLake. So, with DevLake, the open-source project, you can set up your data connections, connect DevLake to all of the dev tools you're using, and then we collect data. And then we provide many different flavors of dashboards for our users.
And many of those dashboards are structured around different questions engineering teams might want to ask. For example, how fast are we responding to our customers' requirements? For that question, we look at metrics like change lead time. Or, for a question like, how accurate is our sprint planning? In that case, the dashboard shows metrics relating to the percentage of planned issues we deliver every sprint. So, based on the questions that the team wants to answer, we provide different dashboards that help them extract insights using the data from their DevOps tools.
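[Editor's note: a simplified, standalone sketch of the change lead time idea (the time from the first commit on a change until it reaches production). DevLake derives this from the data it collects from your Git and CI tools; the timestamps below are hypothetical stand-ins, and this is not DevLake's actual query.]

```python
from datetime import datetime
from statistics import median

# Each tuple is (first_commit_at, deployed_at) for one change.
changes = [
    (datetime(2023, 12, 1, 9, 0), datetime(2023, 12, 1, 17, 0)),
    (datetime(2023, 12, 2, 10, 0), datetime(2023, 12, 4, 12, 0)),
    (datetime(2023, 12, 3, 14, 0), datetime(2023, 12, 3, 18, 30)),
]

lead_times_hours = [(deployed - committed).total_seconds() / 3600
                    for committed, deployed in changes]
print(f"median change lead time: {median(lead_times_hours):.1f}h")  # 8.0h
```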
JOE: It's really interesting you donated it to Apache. And I feel like the hybrid SaaS open-source model is really common. And I've become more and more skeptical of it over the years as companies start out open source, and then once they start getting competitors, they change the license. But by donating it to Apache, you sort of sidestep that potential trust issue.
MAXIM: Yeah, you've hit the nail on the head with that one because, in many ways, for us, engaging with Apache in the way that we have was, I think, ultimately born out of the observations we had about the shortcomings of other products in the space. For one, it was very practical: we realized quickly that if we wanted to offer the most complete visibility possible, it would require connections to so many different products, right?
I think anyone can look at their engineering toolchain and identify perhaps 7, 9, 10 different things they're using on a day-to-day basis. Oftentimes, those aren't shared between companies, either. So, I think part one was just figuring out, like, okay, how do we build a framework that makes it easy for developers to build a plugin and contribute to the project if there's something they want to incorporate that isn't already supported? And I think that was kind of part one.
Part two is, I think, much more important and far more profound, which is developer trust, right? Where we saw so many different products out there that claimed to deliver these insights but really had this kind of black-box approach, right? Where data goes in, something happens, insights come out. How's it doing that? How's it weighting things? What's it calculating? What variables are incorporated? All of that is a mystery. And that really leads to developers, rightfully, not having a basis to trust what's actually being shown to them.
So, for us, it was this perspective of what's the maximum amount of transparency that we could possibly offer? Well, open source is probably the best answer to that question. We made sure the entirety of the codebase is something they can take a look at, they can modify. They can dive into the underlying queries and algorithms and how everything is working to gain a total sense of trust in how this thing is working. And if they need to modify something to account for some nuanced details of how their team works, they can also do that.
And to your point, you know, it's definitely something I would agree with: one of the worst things we see in the open-source community is that companies will be kind of open source in name only, right? Where it's really more of a marketing or sales thing than anything, where it's like, oh, let's tap into the good faith of open source. But really, somehow or another, through bait and switch, through partial open source, through license changes, whatever it is, we're open source in name only but really a proprietary, closed-source product.
So, for us, donating the core of DevLake to the Apache Foundation was essentially our way of, you know, really walking the talk, right? Where no one can doubt at this point, like, oh, is this thing suddenly going to have the license changed? Is this suddenly going to go closed-source? Like, the answer to that now is a definitive no because it is now part of that ecosystem.
And I think with the aspirations we've had to build something that is not just a tool but, hopefully, long-term becomes, like, foundational technology, I think that gives people confidence and faith that this is something they can really invest in. They can really plumb into their processes in a deep and meaningful way with no concerns whatsoever that something is suddenly going to change that makes all of that work, you know, something that they didn't expect.
JOE: I think a lot of companies guard their source code like it's their secret sauce, but my experience has been more that it's the secret shame [laughs].
HENRY: [laughs]
MAXIM: There's no doubt that in my role, especially with our open-source product, driving our community, we've really seen the magic of what a community-driven product can be. And open source, I think, is the truest expression of a community-driven product. We have a Slack community with nearly 1,000 developers in it now. Naturally, right? Some of those developers are in there just to ask questions and answer questions. Some are intensely involved, right? They're suggesting improvements. They're suggesting new features. They're finding ways to refine things.
And it really is that, like, fantastic culture that I'm really proud that we've cultivated where best idea ships, right? If you've got a good idea, throw it into a GitHub issue or a comment. Let's see how the community responds to it. Let's see if someone wants to pick it up. Let's see if someone wants to submit a PR. If it's good, it goes into production, and then the entire community benefits. And, for me, that's something I've found endlessly exciting.
HENRY: Yeah. I think Joe made a really good point on the secret sauce part because I don't think the source code is our secret sauce. There's no rocket science in DevLake. If we break it down, it's really just some UI/UX plus data pipelines. I think what's making DevLake successful is really the trust and collaboration that we're building with the open-source community. When it comes to trust, I think there are two aspects. First of all, trust in metric accuracy, right? Because with a lot of proprietary software, you don't know how they're calculating the metrics. If people don't know how the metrics are calculated, they can't really trust them and use them.
And secondly, the trust that they can always use this software, and there's no vendor lock-in. And when it comes to collaboration, we're seeing that many of our data sources and dashboards were contributed not by our core developers but by the community. And the community really, you know, brings their insights and use cases into DevLake and makes DevLake more successful and more applicable to more teams in different areas of software engineering.
MID-ROLL AD:
Are you an entrepreneur or start-up founder looking to gain confidence in the way forward for your idea? At thoughtbot, we know you’re tight on time and investment, which is why we’ve created targeted 1-hour remote workshops to help you develop a concrete plan for your product’s next steps.
Over four interactive sessions, we work with you on research, product design sprint, critical path, and presentation prep so that you and your team are better equipped with the skills and knowledge for success.
Find out how we can help you move the needle at tbot.io/entrepreneurs.
VICTORIA: I understand you've taken some innovative approaches on using AI in your open-source repositories to respond to issues and questions from your developers. So, can you tell me a little bit more about that?
HENRY: Absolutely. I self-identify as a builder. And one characteristic of a builder is to always chase after the dream of building infinite things within a finite lifespan. So, I was always thinking about how we can be more productive, how we can, you know, get better at getting better. And this year, you know, AI is huge, and there are so many AI-powered tools that can help us achieve more in terms of delivering software.
And then, internally, we had a hackathon, and one project that came out of it is an AI-powered coding assistant called DevChat. We've made it public at devchat.ai. But we've also been closely following, you know, the other AI-powered tools that can make software developers' or open-source maintainers' lives easier. And we've been observing that more and more open-source projects are adopting AI chatbots to help them respond to GitHub issues.
So, I recently did a case study on a pretty popular open-source project called LangChain. It's the hot kid in the AI space right now. And it's using a chatbot called Dosu to help respond to issues. I had some interesting findings from the case study.
VICTORIA: In what ways was that chatbot really helpful, and in what ways did it not really work that well?
HENRY: Yeah, I was thinking of how to measure the effectiveness of that chatbot. And I realized that there is a feature built into GitHub, which is reactions to comments. So, how the chatbot works is, whenever there is a new issue, the chatbot basically runs a retrieval-augmented generation pipeline and then uses an LLM to generate a response to the issue. And then people leave reactions to that comment by the chatbot; mostly, it's thumbs up and thumbs down.
So, what I did is I collected all of the issues from the LangChain repository and looked at how many thumbs up and thumbs down the Dosu chatbot got, you know, from all of the comments it left on the issues. What I found is that across the 2,600 issues the Dosu chatbot helped with, it got around 900 thumbs up and 1,300 thumbs down. So, then it comes to how we interpret this data, right? Because the fact that it got more thumbs down than thumbs up doesn't mean it's actually not useful, or harmful, to the developers.
So, to answer that question, I actually looked at some examples of thumbs-up and thumbs-down comments. And what I found is that a thumbs down doesn't mean the chatbot is harmful. Mostly, the developers are signaling to the open-source maintainers that your chatbot is not helping in this case, and we need human intervention. But where there's a thumbs up, the chatbot is actually helping a lot.
There's one issue where someone posted a question, and the chatbot just wrote the code and basically made a suggestion on how to resolve the issue. And the human response was, "Damn, it worked." That was very surprising to me, and it made me consider, you know, adopting similar technology and AI-powered tools for our own open-source project.
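[Editor's note: for anyone who wants to reproduce this kind of audit, here's a rough sketch using the GitHub REST API, which includes a reactions rollup on each issue comment. The bot login "dosubot[bot]" is an assumption; check the actual account name on the repository you're studying.]

```python
import requests  # third-party: pip install requests

def tally_bot_reactions(owner: str, repo: str, bot_login: str, token: str):
    """Count thumbs up/down reactions on all issue comments left by a bot."""
    up = down = 0
    url = f"https://api.github.com/repos/{owner}/{repo}/issues/comments"
    headers = {"Authorization": f"Bearer {token}",
               "Accept": "application/vnd.github+json"}
    page = 1
    while True:
        resp = requests.get(url, headers=headers,
                            params={"per_page": 100, "page": page})
        resp.raise_for_status()
        comments = resp.json()
        if not comments:  # past the last page
            break
        for comment in comments:
            user = comment.get("user") or {}
            if user.get("login") == bot_login:
                up += comment["reactions"]["+1"]
                down += comment["reactions"]["-1"]
        page += 1
    return up, down

# e.g., tally_bot_reactions("langchain-ai", "langchain", "dosubot[bot]", token)
```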
VICTORIA: That's very cool. Well, I want to go back to the beginning of Merico. And when you first got started, and you were trying to understand your customers and what they need, was there anything surprising in that early discovery process that made you change your strategy?
HENRY: So, one challenge we faced when we first explored the open-source funding allocation problem space is that our algorithm looks at the Git repository. But with software engineering, especially with open-source collaboration, there are so many activities happening outside of the repos on GitHub. For example, I might be an evangelist, and my day-to-day work might be, you know, engaging in community work, talking about the open-source project at conferences. And all of those things were not captured by our algorithm, which was only looking at the GitHub repository at the time. So, that was one of the technical challenges we faced, and it led us to switch over to more of the system-driven metrics side.
VICTORIA: Gotcha. Over the years, how has Merico grown? What has changed between when you first started and today?
HENRY: So, one thing is the team size. When we first got started, we only had, you know, the three co-founders and Maxim. And now we've grown to a team of 70 team members, and we have a fully distributed team across multiple continents. So, those are pretty interesting dynamics to handle. And we learned a lot about how to build an effective and cohesive team along the way.
And in terms of product, DevLake now, you know, has more than 900 developers in our Slack community, and we track over 360 companies using DevLake. So, we've definitely come a long way since we started the journey. And yeah, tomorrow, actually, Maxim and I are going to host our end-of-year Apache DevLake Community Meetup, featuring Nathen Harvey, Google's DORA team lead. Yeah, we've definitely made some progress in the four years we've been working on Merico.
VICTORIA: Well, that's exciting. Well, say hi to Nathen for me. I helped take over DevOps DC with some of the other organizers when he was running it way back in the day, so [laughs] that's great. What challenges do you see on the horizon for Merico and DevLake?
MAXIM: One of the challenges I think about a lot, and I think it's front of mind for many people, especially with software engineering, but at this point, nearly every profession, is what does AI mean for everything we're doing? What does the future look like where developers are maybe producing the majority of their code through prompt-based approaches versus code-based approaches, right? How do we start thinking about how we coherently assess that?
Like, how do you maybe redefine what the value is when there's a scenario where perhaps all coders, you know, if we maybe fast forward a few years, like, what if the AI is so good that the code is essentially perfect? What does success look like then? How do you start thinking about what is a good team if everyone is shooting out 9 out of 10 PRs nearly every time because they're all using a unified framework supported by AI? So, I think that's certainly kind of one of the challenges I envision in the future.
I think, really practically, too, many startups have been contending with the macro climate and the fundraising climate. You know, I think many of the companies out there, us included, had better conditions in 2019, 2020 to raise funds at more favorable valuations, perhaps more relaxed terms, given the climate of the public markets and, you know, monetary policy. That's something we're all experiencing, obviously, and it has tightened things up: revenue expectations are higher, there are higher expectations on getting to a highly profitable place, you know, the benchmark is set a lot higher there.
So, I think it's not a challenge that's unique to us in any way at all. I think it's true for almost every company that's out there. It's now kind of thinking in a more disciplined way about how do you kind of meet the market demands without compromising on the product vision and without compromising on the roadmap and the strategies that you've put in place that are working but are maybe coming under a little bit more pressure, given kind of the new set of rules that have been laid out for all of us?
VICTORIA: Yeah, that is going to be a challenge. And do you see the company and the product solving some of those challenges in a unique way?
HENRY: I've been thinking about how AI can fulfill the promise of making developers 10x developers. I'm an early adopter and big fan of GitHub Copilot. I think it really helps with writing, like, the boilerplate code. But I think it's improving my productivity by maybe 20% to 30%. It's still pretty far away from 10x. So, I'm thinking about how Merico's solutions can help fill the gap a little bit.
In terms of Apache DevLake and its SaaS offering, I think we are helping with, like, team collaboration and measuring software delivery performance, how the team can improve as a whole. And then, recently, we had a spin-off, which is the AI-powered coding assistant DevChat. And that's more about empowering individual developers with common workflows like testing and refactoring.
And one big thing for us in the future is how we can combine these two components, you know, team collaboration and improvement tool, DevLake, with the individual coding assistant, DevChat, how they can be integrated together to empower developers. I think that's the big question for Merico ahead.
JOE: Have you used Merico to judge the contributions of AI to a project?
HENRY: [laughs] So, actually, after we pivoted to engineering metrics, we now focus less on individual contribution because that can sometimes be counterproductive. Because whenever you visualize that, people will sometimes become defensive and try to optimize for the metrics that measure individual contributions. So, nowadays, we no longer offer that kind of metric within DevLake, if that makes sense.
MAXIM: And that kind of goes back to one of Victoria's earlier questions about, like, what surprised us in the journey. Early on, we had this very benevolent perspective, and, you know, I want to underline that: we never sought to be judging individuals in a negative way. We were looking to find ways to make it useful, even to the point of...like, we explored different ways to give developers badges and different kinds of accomplishment milestones, like, things to signal their strengths and accomplishments.
But I think what we've found in that journey is that...and I would really kind of say this strongly. I think the only way that metrics of any kind serve an organization is when they support a healthy culture. And to that end, what we found is that we always like to preach, like, it's processes, not people. It's figuring out if you're hiring correctly, if you're making smart decisions about who's on the team. I think you have to operate with a default assumption within reason that those people are doing their best work. They're trying to move the company forward. They're trying to make good decisions to better serve the customers, better serve the company and the product.
With that in mind, what you're really looking to do is figure out what is happening within the underlying processes that get something from thought to production. And how do you clear the way for people? And that's really been a big kind of, you know, almost tectonic shift for our company over the years: really fully transitioning to that. And I think, in some ways, DORA has represented kind of almost, like, a best practice for, like, processes over people, right?
It's figuring out, between quality and speed, how are you doing? Where are those trade-offs? And then, within the processes that account for those outcomes, how can you really be improving things? So, I would say, for us, the number one thing has been figuring out, like, how do we keep doubling down on processes, not people? And how do we really make sure that we're not just telling people that we're on their side and taking, you know, a very humanistic perspective on wanting to improve people's lives but actually doing it with the product?
HENRY: But putting the challenge of measuring individual contributions aside, I'm as curious as Joe about AI's role in software engineering. I expect to see more and more involvement of AI, gradually, you know, replacing low-level, medium-level, and, in the future, even high-level tasks for humans so we can just focus on, like, the objective instead of the implementation.
VICTORIA: I can imagine, especially if you're starting to integrate AI tools into your systems and you're growing your company at scale, that the ability to have a natural intuition about what's going on really becomes a challenge, and the data you can derive from some of these products could help you make better decisions and all different types of things.
So, I'm kind of curious to hear from Joe; with your history of open-source contribution and being a part of many different development teams, what kind of information do you wish that you had to help you make decisions in your role?
JOE: Yeah, that's an interesting question. I've used some tools that try to identify problem spots in the code. But it'd be interesting to see the results of tools that analyze problem spots in the process. Like, I'd like to learn more about how that works.
HENRY: I'm curious; one question for Joe. What is your favorite non-AI-powered code scanning tool that you find useful for yourself or for your team?
JOE: I think the most common static analysis tool I use is something to find the Git churn in a repository. Some of this probably is because I've worked mostly on projects these days with dynamic languages. So, there's kind of a limit to how much static analysis you can do of, you know, a Ruby or a Python codebase. But just by analyzing which parts of the application change the most, you can find which parts are likely to be the buggiest and the most complex.
I think every application tends to involve some central model. Like, if you're making an e-commerce site, then probably products are going to have a lot of the core logic, purchases will have a lot of the core logic. And identifying those centers of gravity just through the Git statistics has helped me find places that need to be reworked.
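[Editor's note: a minimal version of the churn analysis Joe describes; run it inside a Git repository. It counts how often each file appears in the commit history and lists the most frequently changed files as candidates for closer review.]

```python
import subprocess
from collections import Counter

# "git log --name-only --pretty=format:" prints only the file paths touched
# by each commit, separated by blank lines.
log = subprocess.run(
    ["git", "log", "--name-only", "--pretty=format:"],
    capture_output=True, text=True, check=True,
).stdout

churn = Counter(line for line in log.splitlines() if line.strip())
for path, count in churn.most_common(10):
    print(f"{count:5d}  {path}")
```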
HENRY: That's really interesting. Is it something like a hotspot analysis? And when you find a hotspot, then would you invest more resources in, like, refactoring the hotspot to make it more maintainable?
JOE: Right, exactly. Like, you can use the statistics to see which files you should look at. And then, usually, when you actually go into the files, especially if you look at some of the changes to the files, it's pretty clear that it's become, you know, for example, a class has become too large, something has become too tightly coupled.
HENRY: Gotcha.
VICTORIA: Yeah. And so, if you could go back in time, five years ago and give yourself some advice when you first started along this journey, what advice would you give yourself?
MAXIM: I'll answer the question in two ways: first for the company and then for myself personally. I think for the company, what I would say is, especially when you're in that kind of pre-product-market-fit space, and you're maybe struggling to figure out how to solve a challenge that really matters, I think you need to really think carefully about, like, how would you yourself be using your product? And if you're finding reasons you wouldn't, like, really, really pay careful attention to those.
And I think, for us, like, early on in our journey, we ultimately kind of found ourselves asking: okay, we're a smaller, earlier-stage team. Perhaps, like, small improvements in productivity or quality aren't going to necessarily move the needle. That's one of the reasons maybe we're not using this. Maybe our developers are already at bandwidth. So, it's not a question of unlocking more bandwidth or figuring out where there are kind of weak points or bottlenecks at that level, but maybe how can we dial in our own processes to let the whole team function more effectively.
And I think, for us, like, the more we started thinking through that lens of, like, what's useful to us, like, what's solving a pain point for us, I think, in many ways, DevLake was born out of that exact thinking. And now DevLake is used by hundreds of companies around the world and has, you know, this near thousand developer community that supports it. And I think that's testament to the power of that.
For me, personally, if I were to go back five years, you know, I'm grateful to say there isn't a whole lot I would necessarily change. But if there's anything I would, it would just be to be consistently more brave in sharing ideas, right? I think Merico has done a great job, and it's something I'm so proud of for us as a team, of really embracing new ideas and really making sure, like, best idea ships, right? There isn't a title. There isn't a level of seniority that determines whether or not someone has a right to suggest something or improve something.
And I think with that in mind, for me as a technical person but not a member of technical staff, so to speak, there were many occasions where I felt like, okay, maybe because of that, I shouldn't necessarily weigh in on certain things. And what I've found, and it's a trust-building thing as well, is, like, even if you're wrong, even if your suggestion maybe misunderstands something or isn't quite on target, there's still a tremendous amount of value in just being able to share a perspective and a recommendation and push it out there.
And I think with that in mind, like, it's something I would encourage myself and encourage everybody else in a healthy company to feel comfortable to just keep sharing because, ultimately, it's an accuracy-by-volume game to a certain degree, right? Where if I come up with one idea, then I've got one swing at the bat. But if us as a collective come up with 100 ideas that we consider intelligently, we've got a much higher chance of maybe a handful of those really pushing us forward. So, for me, that would be advice I would give myself and to anybody else.
HENRY: I'll follow the same structure, so I'll start with the advice at the company level and then the advice to myself as an individual. So, at the company level, I think my advice would be to fail fast, because every company needs to go through this exploration phase trying to find their product-market fit, and they will have to test, you know, a couple of ideas before they find the right fit for themselves; the same was true for us. And I wish that we had actually had more structure in exploring these ideas and set deadlines, you know, set milestones for us to quickly test and filter out bad ideas and accelerate the exploration process. So, fail fast would be my suggestion at the company level.
At the individual level, I would say it's more about adapting to my CTO role because, when I started the company, I still had that, you know, graduate student hustle mindset. I love writing code myself. And it was okay if I spent 100% of my time writing code when the company was, you know, at five people, right? But it's not okay [chuckles] when we have, you know, a team of 40 engineers. So, I wish I'd had that realization earlier and transitioned into a real CTO role earlier, focusing more, like, on technical evangelism or building out the technical and non-technical infrastructure to help my engineering teams be successful.
VICTORIA: Well, I really appreciate that. And is there anything else that you all would like to promote today?
HENRY: So, if you're, you know, an engineering leader who's looking to measure some metrics and adopt a more data-driven approach to improving your software delivery performance, check out Apache DevLake. It's an open-source project, free to use, and it has some great dashboards and supports various data sources. And join our community. We have a pretty vibrant community on Slack. And there are a lot of developers and engineering leaders discussing how they can get more value out of data and metrics and improve software delivery performance.
MAXIM: Yeah. And I think to add to that, something I think we've found consistently is there's plenty of data skeptics out there, rightfully so. I think a lot of analytics of every kind are really not very good, right? And so, I think people are rightfully frustrated or even traumatized by them.
And for the data skeptics out there, I would invite you to dive into the DevLake community and pose your challenges, right? If you think this stuff doesn't make sense, or you have concerns about it, come join the conversation, because that's really where the most productive discussions end up coming from, not from people mutually high-fiving each other for a successful implementation of DORA.
But the really exciting moments come from the people in the community who are challenging it and saying, like, "You know what? Here's where I don't necessarily think something is useful, or here's what I think could be improved." And it's not up to us as individuals to either bless or deny that. Those discussions are where the community gets really exciting. So, I would say, if you're a data skeptic, come and dive in, and so long as you're respectful, challenge it. And by doing so, you'll hopefully not only help yourself but really help everybody, which is what I love about this stuff so much.
JOE: I'm curious, does Merico use Merico?
HENRY: Yes. We've been dogfooding ourselves a lot. And a lot of the product improvement ideas actually come from our own dogfooding process. For example, there was one time we looked at a dashboard that has this issue change lead time metric. And we found our issue change lead time, you know, went up in the past few months. And then, we were trying to interpret whether that was a good thing or a bad thing, because just looking at a single metric doesn't tell us the story behind the change in the metric. So, we actually improved the dashboard to include some, you know, covariates of the metric, some other related metrics, to help explain the trend. So yeah, dogfooding is always useful for improving the product.
VICTORIA: That's great. Well, thank you all so much for joining. I really enjoyed our conversation.
You can subscribe to the show and find notes along with a complete transcript for this episode at giantrobots.fm. If you have questions or comments, email us at hosts@giantrobots.fm. And you can find me on Twitter @victori_ousg.
This podcast is brought to you by thoughtbot and produced and edited by Mandy Moore.
Thanks for listening. See you next time.
Support Giant Robots Smashing Into Other Giant Robots