How It's Tested | Ep. #9, Leading Engineering Teams with Dave Lewis of Mobot
Listen to How It's Tested Ep. #9, Leading Engineering Teams with Dave Lewis of Mobot.
Eden Full-Goh: Today I'm speaking with Dave Lewis, Mobot's very own head of engineering. Hi, Dave. Thanks so much for joining me on the podcast today.
Dave Lewis: Of course. Happy to be here, Eden.
Eden: I wanted to explain a little bit to our audience why we're doing this special guest episode with you. Well, we've gotten some questions from some of our listeners. They want to know a little bit more about our team here at Mobot and what we do, especially because everyone knows that we build fun robots that do cool stuff.
So I am actually going to be doing a separate founding story episode about the early days of Mobot with Jereme Corrado, our CTO. But in the meantime, I thought it would be great to also get your perspective, Dave, as you're our head of engineering and you oversee a lot of interesting projects and initiatives here at our company.
Also, you have a very interesting background where you've worked at a number of large companies and at small, very early stage startups, around the same stage as Mobot or even earlier. You've seen a lot of different stories, good and bad, ups and downs, which I think gives you a unique perspective on what makes a good engineering team and what good practices look like.
What is your philosophy on testing? I'd love to cover a lot of that today in this episode. But to kick us off, Dave, I'd love for you to give an introduction about yourself, what you did before joining Mobot, and maybe a little bit about your career so far.
Dave: Sure. I kind of fell into the industry because I thought it was interesting and I was in the right place at the right time as the web was taking off in the 90s. It didn't really require qualifications beyond knowing what the web was to get a job in the field. I had done coding in college so I knew how to program, but I hadn't been a computer science major, so I didn't think of myself as a software engineer.
In fact, I interviewed for a testing internship at Microsoft my senior year, and at that time it was definitely an interview about how you would test things manually. I was asked, "How would you test a new toaster?" Really the point was, can you think about the different ways that things fail? The first couple of companies I worked at were very, very early stage companies.
One was a famous e-sports website, another was built on audio and video search technology. Very different from each other, technology-wise and market-wise. But they were both small engineering teams building product early on, and I would say our testing strategies for both of those were about what you would expect, especially 20 years ago: before a release there was a list of things that someone would go and test.
It was not necessarily an engineer's job to do the testing; it was kind of a handover of, "Okay. The build is ready to go out, so go through all these test cases." It was the kind of situation where we were deploying once a week, so it was like, "Ah, it's build Wednesday. Almost nothing else happens today because all we do is build and test and fix, and build and test and fix, and try to get things out to production."
That was kind of the world as I saw it and understood it at that point. I knew how to think about edge cases, but the idea of testing as an automated part of the software product itself was not on my radar.
It was around that time that I first started to see, at least within the Java community, what I think of as TestNG. That was the very beginning of my exposure to unit tests. It wasn't something I had ever learned anywhere, so it was very much starting from scratch: what is a unit test? There was lots of conversation out there about the definitions of unit tests versus integration tests, and none of that mattered to me at that point.
It was just, "Oh, you could have a program that tests stuff." And, "Oh, wow. You could run that all that time, that's really interesting." That was just such a new idea to me. Then as the industry evolved and then terms like test driven development soon reached the software development mainstream. You even had what I think of as the small, scrappy startups.
It was really even described as a way to be more effective, more productive, not a nice to have or an indulgence that a big company would have lots of test coverage, but in fact a faster, better way for small teams to develop. This was in the 2000s and I think that really changed the way I thought about reliability. Just the role of automation in computers beyond the project itself, but handling lots of things that were otherwise manual.
Since then, I think the tools and frameworks have really improved, and so have the general practices of the field. It's so easy now to add tests to your code. And there's so much more to it: you can use tests as a definition of spec, as a way of communicating expected behavior. It's just really interesting.
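[Editor's note: as a concrete illustration of "tests as spec," here is a minimal sketch of a unit test in the TestNG style Dave mentions. The Toaster class and its behavior are invented for this example, a nod to the interview question above, not code from Mobot or any real project.]

```java
// A unit test doubling as a spec: each test states an expected behavior
// in a way a human can read. "Toaster" is made up for illustration.
import org.testng.annotations.Test;
import static org.testng.Assert.assertEquals;
import static org.testng.Assert.assertThrows;

public class ToasterTest {

    // Trivial class under test, defined inline so the example is self-contained.
    static class Toaster {
        int toastSeconds(int darknessSetting) {
            if (darknessSetting < 1 || darknessSetting > 5) {
                throw new IllegalArgumentException("darkness setting must be 1-5");
            }
            return darknessSetting * 30; // 30 seconds of toasting per setting
        }
    }

    // Reads as a spec: setting 3 means 90 seconds of toasting.
    @Test
    public void settingThreeToastsForNinetySeconds() {
        assertEquals(new Toaster().toastSeconds(3), 90);
    }

    // Edge cases become executable documentation of how things fail.
    @Test
    public void rejectsOutOfRangeSettings() {
        assertThrows(IllegalArgumentException.class,
                () -> new Toaster().toastSeconds(0));
    }
}
```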
Now I get to be at Mobot, where testing is what we do. One of the things I really enjoy, beyond the fact that there are robots at Mobot, is the challenge of representing the real-world embeddedness of devices and interactions, and seeing how different companies, different solutions, are using that hardware, the sensors, the location, whatever it is, to do something that's totally new.
Eden: Yeah. Thanks, Dave. That's a super helpful overview of your career so far and also what led you to your role at Mobot today. What's really interesting to me is I remember a couple of years ago when you and I first met, you were leading engineering at a seed stage startup, and before that you'd worked at companies like InVision and Birchbox, really big names.
So you've seen engineering teams big and small. Do you feel like there's a certain threshold or readiness that an engineering team has to hit before it makes sense to invest in testing, good DevOps practices, platform engineering, or even reorienting the team? What are some of the phases or milestones that you feel an engineering team has to go through in their journey together?
Dave: I think one of the exciting things about the way technology has evolved is that now even a one-person company can have great test coverage; you would expect a lot of these pieces. There's not this sense of, "Oh, we don't need to test until we have five people." It's the way good software is created now, and there are so many different benefits, especially because the maintenance becomes more lightweight.
When having tests takes you twice as long as not having tests, you can talk yourself out of writing them. But it's a different story if it's just a little bit of overhead to keep things up to date, or even better, if it's a way that lets you feel more comfortable with the code you're writing or helps guide you to an implementation or a solution. I think 20 years ago you would've asked, "When do we hire a QA tester?" Maybe once you had a whole team of 10 engineers you might, but then you're still trying to figure out, "Well, would I rather hire one engineer or one QA tester? Can we just get by?"
There was a lot of that conversation because it still meant someone manually testing, and it's very easy for there to be a disconnect between a manual tester and the product, and certainly the way the engineering team thinks about the product. It's very common, I would say even still today, that if you have a manual QA team, the first reaction of the engineering team when a bug is reported is, "No, that's not really true. Probably there's some miscommunication or misunderstanding."
It takes a while to validate that there was a bug, and sometimes it is the case that the test case was written from a different perspective, so something wasn't done, or the behavior did what it was supposed to even though it was confusing. So in that gap, I think it's still interesting that even at Mobot we see that with our customers. That gap. That skepticism.
Actually, let me put it the other way: confidence in your own application. "I've written it, I know it works, I don't believe these things are a failure because I wrote them to succeed. I've seen them work, I've tested it, I had unit tests, we have automated tests." Something that I think was a big transition five to ten years ago was: how do we start having QA engineers, not manual QA testers? How do we start thinking about automation and not just manual exploration in our testing?
Because we had automated unit tests, but those are obviously limited in what they can test. They're not testing interactions with your payment gateways, or integrations with other systems, or back office tools at your own company, or whatever they are. So the question became: how can we run these tests in an automated fashion so that we're not waiting for someone? Not, "Oh, we have to get on their schedule," or waiting for them to finish before it's ready.
Instead it becomes something where, yeah, you can say we have QA embedded in the engineering team, and that can work, but it takes a lot of work and the right match to make that work. Whereas it's different if we've decided that testing correctness is actually an engineering problem, much like, as you said, DevOps.
DevOps is a way of saying, "No, there's not this other operations team that sets stuff up." It's now part of our engineering process. It's not another group that is responsible; we are the ones responsible. It definitely expands what you need your engineers to know or be conversant in. But it also lets smaller teams be much more effective and do more things, and it really keeps the integration and the vision intact.
It's a much more holistic way of working, because if you've had integration tests that have run and run, and then they break, well, something broke. There's no way around it. You can have bad tests that you know are flaky, and that's its own challenge; that's kind of similar to having an external group testing. But if you have tests you trust because they pass all the time, and then they stop passing, obviously something has changed.
It may be that the test itself is what's wrong, but that's because some behavior has changed, so it probably tells you something: "Oh, that's right. I don't know why that broke." That may also represent a disconnect somewhere else in the code, and surfacing that keeps the responsibility on the people who are making the changes and are closest to them, which I think is the healthiest, most effective part.
I mean, you know, I think of it as similar to DevOps, where instead of, "Oh, I can't do this work because I need new instances spun up or new things configured," it's: no, that's part of your job. You need to make sure that as this goes to production, it's going to get stood up in the proper way. The move toward a more and more holistic view of responsibility, of shared responsibility, has in some places, I've seen it, increased the stress or the number of things the engineer is thinking about.
The trade-off, though, is that there's no blocking by other teams, responsibility is very clear, and it lets the engineering team allocate time, effort, and importance across these different pieces internally, balanced against their own work. Instead of, "Oh, we're trying to argue for budget for two more QA testers," which is an external thing.
Instead, it's, "We have our own time, our own whatever. Are we prioritizing testing coverage? Are we prioritizing passing tests? That's on us to figure out because we're the ones then who are paying the price if we don't do that because we're the ones who clearly allowed something to out broken." That kind of thing.
Eden: Yeah. That makes a lot of sense. Switching gears for a little bit, I want to talk about your current role and the work you do at Mobot. I think a common question or reaction some of our audience members might have is that for most of your career you've been working on engineering teams that were very focused on a SaaS or software-only solution delivered to market.
What's special about Mobot is, of course, that we have literal mechanical robots. You have to coordinate not only the infrastructure of setting all of that up, manufacturing it, installing it, the office stuff of course, but also the firmware that runs on the robot. We have our own mobile app that we use to calibrate the phones that actually sit on the test bed.
Then there's, of course, our SaaS, the web interface where we get all the reports. What is it actually like to be an engineering leader at a company that has all of these other facets of engineering to interface with, beyond the vanilla web app or mobile app that a lot of engineering leaders are normally tasked with building out?
Dave: Yeah. I think the surface area of the team, as you said, is so much broader. For example, if you're using an AWS product as your messaging queue or something, you're only concerned with that. You can figure out what's going on, but it's all at that abstraction layer. Whereas for us, it's not just the messaging queue. For our things it's, "Oh, this isn't working."
What we're seeing is the software not working, but that doesn't mean the software is what's broken. It could certainly be something at a hardware level, which means it could be mechanical, it could be a failure in the board itself, it could be something happening in the physical environment.
One of the interesting things about our fleet of devices is that we have to pay attention to charging cycles and battery swelling. You can have what seems to be a perfectly reasonable phone that works and works, and then suddenly it's not registering taps the right way. Then you look at it and it's, "Oh well, the battery has started to swell. The phone doesn't lie flat like we expect it to."
If you're writing React code, you're not worried about something swelling and throwing things off. You might be interacting with browsers, but it's all very circumscribed in the digital world. It means that communication on the team is extremely important, because full stack for us is different. I was talking about this with someone else on the team: if you say full stack developer in the greater SaaS world, that usually means, "I can do frontend, and I can do database and backend services."
For us, full stack means I can do React, I can do database work, I can do firmware, I can do C, and I can understand the mechanical interaction between the gears as they move around and think about how fast our gantry can move. Because that stack is so much deeper, the number of people who could build across all of it, and in fact have, is approximately zero.
So it really requires more communication and collaboration. It means there are lots of opportunities for everybody to learn things, whether you're starting at the front and learning your way down, or starting at the back and learning your way up. I really like the way it requires folks to trust each other and interact with each other; there's nobody who can go off and do something entirely on their own.
It's really interesting in terms of the interactions between more and less experienced folks, because it's easier for a less experienced person overall to still be an expert in a part of the stack that the more experienced person is not. And so I really enjoy that relationship, that exchange of expertise, and the dynamic it creates.
I would say that's one of the defining cultural aspects of our team: that interaction with each other, the expectation of collaboration, and the understanding that there's no one who can hero the thing on their own. We have to talk to each other to understand the outlying pieces of where an actual challenge might be, or how to implement something, because not everything is implemented only in the firmware or only in React.
For example, there are so many different ways to interact with your phone: you can tap, you can drag, you can pinch, and all those things. Adding a new action touches everything. We have to figure out how to perform the action with the end effector, we have to have a user experience that lets you invoke it, and we have to bring those things together.
So yeah, I really enjoy that aspect of our team culture that comes out of the incredible depth of the stack. It does mean there are fewer simple things than there might be elsewhere, but it has a really nice impact on the way the team operates, for sure.
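[Editor's note: a sketch of the cross-layer point Dave makes about adding a new gesture. A single action type has to exist in the web UI's pixel coordinates and survive translation down to the physical coordinates the firmware drives the gantry with. All names, fields, and numbers here are invented for illustration; this is not Mobot's actual code.]

```java
public class GestureSketch {

    // Hypothetical gestures a user can request from the robot's end effector.
    enum GestureType { TAP, DRAG, PINCH }

    // One requested action, expressed in screen pixels by the web UI.
    record Gesture(GestureType type, int startX, int startY, int endX, int endY) {

        // Translate screen pixels into the physical gantry millimeters the
        // firmware consumes; the scale factors would come from calibrating the
        // phone's position on the test bed (invented numbers below).
        double[] toGantryTargetMm(double mmPerPixelX, double mmPerPixelY) {
            return new double[] { startX * mmPerPixelX, startY * mmPerPixelY };
        }
    }

    public static void main(String[] args) {
        // A tap at pixel (540, 1200): the UI captures it, and the same object
        // is what eventually gets turned into a physical motion target.
        Gesture tap = new Gesture(GestureType.TAP, 540, 1200, 540, 1200);
        double[] target = tap.toGantryTargetMm(0.06, 0.06);
        System.out.printf("Move end effector to x=%.1fmm, y=%.1fmm%n",
                target[0], target[1]);
    }
}
```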
Eden: When you first joined the company, did you feel like there was a lot you had to ramp up on in terms of mechanical engineering or robotics in order to lead the team, coach engineers, and provide folks with advice? Was there a lot of learning you had to do, or do you feel like a lot of what you learned working at other companies, even in a software-only world, actually translates over?
Dave: I think both are true. I'm lucky enough to have some robotics in my background, so the idea of hardware, of interacting with the physical world, wasn't brand new. It's not my specialty, but I had at least enough background that I wasn't starting from absolute scratch. But I also very strongly believe that there are a lot of commonalities in the way engineering teams operate across companies.
That was certainly one thing I talked with you about when we first met: "Here's what I've seen be successful. Here's what I think applies no matter what product you're building." The different requirements of communication, the prioritization of collaboration across the company, all these sorts of pieces. The ways you give engineers opportunities to grow, how they can interact with each other, how we keep track of what we're doing. A lot of that is at least semi-generalizable.
It's not perfectly generalizable, which is why this is an interesting job and there's not just some SaaS product you use to run your engineering team. But a lot of it does cross-apply, and to me it's very much like an engineer taking a new job and saying, "I've worked with databases before. I haven't worked with this database, but I know how to think about indexes and how to make it scale, and I'm going to do that here."
So I think of it the same way when coming into a new organization: "Okay, here are some things I've seen before. What are the techniques that work? What are the technologies, by analogy, that I've experienced in the past? What's different here?" That last part is super important. Certainly when I started, the first month or two was a lot of listening, and I was very picky about the places where I was going to have a strong opinion or advocate for change, because Mobot was already a successful company.
This wasn't a situation of, "Nothing is working, nobody likes each other, maybe you have to redo lots of it." No, it was a growth opportunity, right? We're growing. We know that some of the things we're doing need to change as we scale, but there's also a history of how we've been doing things, and some of the people are here because of that.
I've been in circumstances where the new person comes in and says, "I know how things work. That's why I'm here. We're going to do this. I know you've done that; we're going to do it this other way." Certainly I've been managed that way, and it was not very enjoyable, so it's a high priority for me coming into a new situation to make sure that I'm affirming and understanding the things that were working and are working.
My job is to keep the team in continuous improvement, not to tear things down, especially in areas that are core to the culture of the team or to the success of the company.
Eden: Yeah, I can definitely see that. A lot of what you believe has actually manifested, and I have the privilege of working with you basically every day and seeing how your values really do shine through. I think one other thing that has been special is that you found a way to build a lot of new initiatives on top of the culture that we have, but in a way where we didn't have to throw anything out, exactly like you said.
We already had a lot of great things, but there was just a lot of growth we needed to go through in the last year. Before you joined the company, we had a core platform that we were starting to understand: we had customers that were using Mobot, customers that were running tests, and we were starting to identify the types of test cases that would be interesting to automate and useful to the market.
But I think, yeah, what's been really cool is that you've been helping spearhead initiatives at our company built on that. Now that we know those are the kinds of things people want to automate testing for, engineers want a way to control a robot themselves. They want to craft their own tests, see the analytics and the reports, and then make decisions and triage things.
I guess as a segue, maybe you can explain a little bit more about what you're currently working on? What's the future of Mobot? What makes you excited about what we're building?
Dave: Yeah. Right now, for customers, Mobot is a testing platform that they interact with at a test case level, a client-customer level. They're not interacting with the devices directly, and for some teams that's ideal; they're deliberately trying not to internalize this part of the process, which means for those customers the product we're building is QA as a service.
There are other teams where we're a little more embedded with their development group, and they don't want QA as a service; they want a partner, they want tools to augment the team they currently have. They're having challenges testing manually across lots of devices. Especially if you're a young company, you don't want to go out and buy 200 phones to make sure your app works across all of the different OS versions and hardware versions, but you know that it's important.
You want to be able to test things on hardware every week without distracting and disrupting the team and process you already have, and so in those situations we're able to come in and act as a teammate. Having learned from both of those kinds of customers, what we've been able to do is figure out how we can allow customers to have more control.
I think that's one of the things about the platform right now: a customer controls what we test, but only at that level. They aren't controlling the devices directly. They can go and read the reports about what testing happened, but if they want to say, "I think we fixed it," they have to schedule testing with us again. They can't just go test it themselves in the same environment.
Obviously they can test it locally, but presumably they already tested it locally and it worked; the situation is that in the real world it wasn't working. So we've learned how our operators interact with the phones and devices, what things they need, and what interaction patterns come up, and we're now building software that exposes all of that to customers.
We have lots of videos on the website of robots in motion, and whenever we talk to folks, as you said with the questions from the audience, they're already interested in how the bots work. "Oh, look at the bots." That's always really interesting, and it's cool because someone is sitting next to the bot and interacting with it. What we're building now is going to allow anyone, whether it's a customer in the same city as us, one of our operators on the other coast, or a customer somewhere else in the world, to directly interact with a robot in our bot lab, in our data center. It's unreal.
You get used to seeing the bots move while people are next to them. Now we have racks of bots that are moving because someone else, somewhere else, is doing something, and it's wild. It really feels magical to see testing happening where you don't have to be at your phone tapping. You can be somewhere else and be directly interacting. And very soon, people will be able to run their own automated tests on our devices.
They can run them on their own schedule. It gives us a much broader spectrum of offerings for customers. If you're a nimble engineering team and device coverage is your struggle, or finding a way to test on hardware on a reliable cadence is difficult, or your budget is small, or your needs are very well circumscribed, now you can interact directly, build your own test cases, and run them in an automated fashion yourself.
It's approaching the point where you can start to integrate that into your continuous integration and continuous deployment platforms. Obviously hardware is slower than simulators, so I think that's still a place where we're experimenting. But we already see with a lot of customers that that last step in their build and deploy pipeline is disconnected and handled manually.
Now they can build webhooks into their own systems and say, "Okay. We're going to push our button, and these tests are going to run at Mobot at midnight when our build is ready, not just when Mobot has set things up for us." It's really exciting to see that coming to fruition, and to think that this is still just the beginning of the platform, because it doesn't take a ton of imagination to say, "Okay, now anybody can interact with a device." We now have this infrastructure available in the way that AWS was really pioneering.
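[Editor's note: a sketch of what such a hook might look like from the customer's side, as a post-build CI step that queues a test run on physical devices. The endpoint URL, payload fields, and token variable are hypothetical stand-ins, not a documented Mobot API.]

```java
// Hypothetical post-build CI step: queue a device-lab test run over HTTP.
// The endpoint, JSON fields, and MOBOT_API_TOKEN are illustrative
// assumptions only, not a real, documented API.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class QueueDeviceTestRun {
    public static void main(String[] args) throws Exception {
        String buildUrl = args[0]; // e.g. the artifact URL from the CI build

        String payload = """
                {
                  "build": "%s",
                  "testPlan": "nightly-smoke",
                  "scheduleAt": "00:00"
                }""".formatted(buildUrl);

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/v1/test-runs")) // placeholder endpoint
                .header("Authorization", "Bearer " + System.getenv("MOBOT_API_TOKEN"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        // Fire the request and surface the queued run in the CI log.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Queued device test run: " + response.body());
    }
}
```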
With AWS, first you were running abstractions of hardware, and that's become more and more abstract; we don't even talk about the things we deploy to as hardware anymore, even though it started that way. Right now we're offering "I can go run this on a phone," but where does that take us? How does that spread out? What are the kinds of things you can do to really enable teams to build more reliable software?
Usually the problems that teams are really excited to work on are improving their own product, not spending a week chasing some arbitrary bug in a corner case. Instead we're able to build the tools that let computers do the harder work, that let robots do the hard work.
It's super interesting, and it's really exciting to see the delight on people's faces as they realize that they're the ones interacting with the phone. They've seen the videos of other people doing this, of paying Mobot to do stuff, and now it's, "Oh, I told this to do that. Oh, it's running my thing on its own." It's super exciting, and it's really fun to see the way it's going to enable more and more workflows and companies too.
Probably everyone listening to this podcast has a phone. Hopefully we're helping everyone have better phone experiences with the apps that are important to your life: your bank, your medical services, what have you. I'm sure folks out there have had good and bad experiences with those kinds of applications. If we have a chance to improve that, that's super exciting.
Eden: Yeah. Some additional context for folks: where we are right now with Mobot is that for the last three or four years we've been building a platform with internal tools that allow our team, our operators, to program the robot, and we've been dogfooding that tool. It's a no-code interface where you essentially click on the screen where you want the robot to tap, and the robot moves accordingly.
We've been dogfooding that platform for the last few years with a number of different customers, using it to deliver QA as a service. As you were mentioning, operators have to be within reasonable physical proximity of the robot in case something needs to be adjusted or recalibrated, or the robot needs to get back on track.
But we are now at a point where we have a video feed, a camera feed of the robot, so that you don't have to be next to it. You could be, like you said, halfway around the world and still be able to observe a robot, intervene, and get it back on track. If that's something we're opening up to our team so they can operate a robot remotely, then it's something we can also open up to customers, which is very compelling.
I think it gets us closer to that dream of something that feels very much like the other testing tools on the market that folks are familiar with, but gives you the additional fidelity and rigor of physical, real-world testing to cover all those edge cases that just aren't possible to cover with software alone.
Dave: Yeah.
Eden: Thank you so much, Dave. I really enjoyed this conversation and diving a little bit deeper with you. I think a lot of folks are very curious about the world of physical robotics, hardware, and mechanical engineering that we have the privilege of dealing with at our company, and they're always interested in getting that insider perspective. So thanks for joining me on this episode.
Dave: Thanks, Eden.