We are thrilled to announce the third session of our new Incubator Program. If you have a business idea that involves a web or mobile app, we encourage you to apply to our eight-week program. We'll help you validate your market opportunity, experiment with messaging and product ideas, and move forward with confidence toward an MVP.
Learn more and apply at tbot.io/incubator. We look forward to seeing your application in our inbox!
Peter Voss is the CEO and Chief Scientist of Aigo.ai, a groundbreaking alternative to conventional chatbots and generative models like ChatGPT. Aigo's chatbot is powered by Artificial General Intelligence (AGI), enabling it to think, learn, and reason much like a human being. It boasts short-term and long-term memory, setting it apart in terms of personalized service and context-awareness.
Along with host Chad Pytel, Peter talks about how most chatbots and AI systems today are basic. They can answer questions but can't understand or remember the context. Aigo.ai is different because it's built to think and learn more like humans. It can adapt and get better the more you use it. He also highlights the challenges Aigo.ai faces in securing venture capital, given that its innovative approach doesn't align with current investment models heavily focused on generative or deep learning AI. Peter and Chad agree that while generative AI serves certain functions well, the quest for a system that can think, learn, and reason like a human demands a fundamentally different approach.
- Follow Aigo.ai on LinkedIn or YouTube.
- Follow Peter Voss on LinkedIn. Visit his website: optimal.org/voss.html
- Follow thoughtbot on X or LinkedIn.
Become a Sponsor of Giant Robots!
CHAD: This is the Giant Robots Smashing Into Other Giant Robots Podcast, where we explore the design, development, and business of great products. I'm your host, Chad Pytel. And with me today is Peter Voss, CEO and Chief Scientist at Aigo.ai. Peter, thanks so much for joining me.
PETER: Yes, thank you.
CHAD: So, tell us a little bit about what Aigo.ai does. You've been working in AI for a long time, and it seems like Aigo is the current culmination of your 15-plus years of work, so...
PETER: Yes, exactly. So, the quick way to describe our current product is a chatbot with a brain, and the important part is the brain. That basically, for the last 15-plus years, I've been working on the core technology for what's called AGI, Artificial General Intelligence, a system that can think, learn, reason similar to the way humans do. Now, we're not yet at human level with this technology. But it's a lot smarter and a lot more usable than traditional chatbots that don't have a brain.
CHAD: I want to dig into this idea a little bit. Like a lot of people, I've used traditional chatbots, and ChatGPT is the latest; I've built some things on top of it. What is the brain that makes it different? For someone who's used one of those, how is using Aigo going to be different?
PETER: Right. I can give a concrete example of one of our customers, then I can talk about the technology. So, one of our big customers is the 1-800-Flowers group of companies, which includes Harry & David, The Popcorn Factory, and several others. And they wanted to provide a hyper-personalized concierge service for their customers where, you know, the system learns who you buy gifts for, for what occasions, you know, what your relationship is to them, and basically remembers who you are and what you want for each of their 20 million customers.
And they tried different technologies out there, you know, all the top brands and so on, and they just couldn't get it off the ground. And the reason is because they really don't learn. And we now have 89% self-service on the things that we've implemented, which is pretty much unheard of for complex conversations.
So, why can we do that? The reason is that our system has deep understanding. So, we have deep parsing, deep understanding, but more importantly, the system remembers. It has short-term memory. It has long-term memory. And it uses that as context. So, you know, when you call back a second time, it'll remember what your previous call was, you know, what your preferences are, and so on. And it can basically use that information, the short- and long-term memory, and reason about it. And that is really a step forward.
Now, until ChatGPT, which is really very different technology from chatbot technology...I mean, with chatbot technology, the kind of thing we're talking about is really augmenting call centers, you know, automating call center calls. There, you need deep integration into the customer's back-end systems. You obviously need to know what the latest product availability is, what the customer's outstanding orders are, you know, all sorts of things like delivery schedules. And we probably have, like, two dozen APIs that connect our system to their various corporate databases and so on.
Now, traditional chatbots obviously can do that. You hook up the APIs and do things, and it's, you know, it's a lot of work. But traditional chatbot technology hasn't really changed much in 30 years. You basically have a categorizer: how can I help you? What is the intent? An intent categorizer. And then once your intent has been identified, you basically have a flowchart-type program that, you know, forces you down a flowchart. And that's what makes them so horrible: it doesn't use context. It doesn't have short-term memory.
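To make the pattern Peter is describing concrete, here is a minimal sketch of the intent-categorizer-plus-flowchart design. All intent names, keywords, and prompts are made up for illustration; they are not taken from any real product.

```python
# Minimal sketch of the traditional chatbot pattern: an intent
# classifier followed by a fixed flowchart. All names here are
# illustrative assumptions, not from any real system.

def classify_intent(utterance: str) -> str:
    """Crude keyword-based intent categorizer."""
    text = utterance.lower()
    if "order" in text and "where" in text:
        return "track_order"
    if "cancel" in text:
        return "cancel_order"
    return "fallback"

# Each intent maps to a rigid flow: a fixed sequence of prompts.
FLOWS = {
    "track_order": ["What is your order number?", "What is your zip code?"],
    "cancel_order": ["What is your order number?", "Why are you cancelling?"],
    "fallback": ["Sorry, I didn't understand. Can you rephrase?"],
}

def start_flow(utterance: str) -> list[str]:
    # Nothing here carries over between turns or sessions -- the flow
    # restarts from scratch every time, which is exactly the lack of
    # short-term memory being criticized.
    return FLOWS[classify_intent(utterance)]
```

The point of the sketch is what is missing: there is no state that survives from one call to the next, so the system can never use what it learned about you last time.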
CHAD: And I just wanted to clarify the product, since you mentioned call centers. So, this isn't just text-based chat. This is voice, too.
PETER: Yes. We started off with chat, and we now also have voice, so omnichannel. And the beauty of the system having the brain as well is you can jump from text messaging to a chat on the website to Apple Business Chat to voice, you know. So, you can basically move from one channel to another seamlessly. You know, so that's as against traditional chatbot technology, which is really what everybody is still using.
Now, ChatGPT, of course, the fact that it's called ChatGPT sort of makes it a bit confusing. And, I mean, it's phenomenal. The technology is absolutely phenomenal in terms of what it can do, you know, write poems and give you ideas. And the amount of information it has is amazing. However, it's really not suited for commercial-grade applications because it hallucinates and it doesn't have memory.
CHAD: You can give it some context, but it's basically faking it. You're providing it information every time you start to use it.
PETER: Correct. The next time you connect, that memory is gone, you know [crosstalk 05:58]
CHAD: Unless you build an application that saves it and then feeds it in again.
PETER: Right. Then you basically run out of context window very quickly. In fact, I just published a white paper about how we can get to human-level AI. And one of the things we go over in the paper is a benchmark of our technology where we fed the system about 300 or 400 simple facts. You know, it might be my sister likes chocolate or, you know, it could be other things like I don't park my car in the garage or [chuckles], you know. Just simple facts, a few hundred of those. And then we asked questions about that.
Now, ChatGPT scored less than 1% on that because, you know, with an 8K window, it basically just couldn't remember any of this stuff. So, we use --
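The context-window effect Peter describes can be sketched with a toy model: if a few hundred facts are fed in as raw text, only the most recent ones still fit in a fixed-size window, so questions about earlier facts can no longer be answered. The fact format and word-based token counting below are simplified assumptions for illustration, not the actual benchmark.

```python
# Toy illustration of a fixed context window: facts fed in as raw
# text silently fall off the front once the window is full.

def truncate_to_window(facts, window_tokens):
    """Keep only the most recent facts that fit in the window."""
    kept, used = [], 0
    for fact in reversed(facts):
        tokens = len(fact.split())  # crude whitespace token count
        if used + tokens > window_tokens:
            break
        kept.append(fact)
        used += tokens
    return list(reversed(kept))

# A few hundred simple facts, like the benchmark describes.
facts = [f"fact {i}: person {i} likes item {i}" for i in range(400)]

# With a small window, only the tail end survives; questions about
# the early facts can no longer be answered from the visible context.
visible = truncate_to_window(facts, window_tokens=100)
```

The same dynamic applies at any window size: 8K tokens just moves the cutoff, it doesn't remove it.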
CHAD: It also doesn't, in my experience...it's basically answering the way it thinks the answer should sound or look. And so, it doesn't actually understand the facts that you give it.
And so, if you feed it a bunch of things which are similar, it gets really confused because it doesn't actually understand the things. It might answer correctly, but it will, in my experience, just as likely answer incorrectly.
PETER: Yeah. So, it's extremely powerful technology for helping with search as well, if a company has all the documents and so on...but the human always has to be in the loop. It just makes way too many mistakes. But it's very useful if it gives you information 8 out of 10 times and saves you a lot of time. And it's relatively easy to detect the other two times where it gives you wrong information. Now, I know in programming, sometimes, it's given me wrong information, and it ended up taking longer to debug the misinformation it gave me than it would have taken me to do it myself. But overall, it's still a very, very powerful tool.
But it really isn't suitable for, you know, serious chatbot applications that are integrated into back-end systems, because these need to be signed off by...the legal department needs to be happy that it's not going to get the company into trouble. The marketing department needs to sign off on it, and customer experience, you know. And with a generative system like that, you really can't rely on what it's going to say, and that's apart from security concerns and, you know, the lack of memory and deep understanding.
CHAD: Yeah. So, you mentioned generative AI, which is sort of one of the underlying pieces of ChatGPT. In your solutions, are you using any generative solutions?
PETER: No, not at all. Well, I can give one example. You know, what 1-800-Flowers do is they have an option to write a poem for your mother's birthday or Mother's Day or something like it. And for that, we will use ChatGPT, or they use ChatGPT for that because that's what it's good at. But, you know, that's really just any other app that you might call up to do something for you, you know, like calling up FedEx to find out where your goods are.
Apart from that, our technology...it's a good question you ask because, you know, statistical systems and generative AI have really dominated the AI scene for about the last 12 years, really sort of since DeepMind started. Because it's been incredibly successful to take massive amounts of data and massive amounts of computing power and, you know, number-crunch them and then be able to categorize and identify images and, you know, do all sorts of magical things.
But the approach we use is cognitive AI as opposed to generative. It's a relatively unknown approach, but that's what we've been working on for 15 years. And it starts with the question of what intelligence requires, to build a system so that it doesn't need massive amounts of data. It's not the quantity of data that counts; it's the quality of data.
And it's important that it can learn incrementally as you go along like humans do and that it can validate what it learns. It can reason about, you know, new information. Does this make sense? Do I need to ask a follow-up question? You know, that kind of thing. So, it's cognitive AI. That's the approach we're using.
CHAD: And, obviously, you have a product, and you've productized it. But you said, you know, we've been working on this, or you've been working on this model for a long time. How has it progressed?
PETER: Yes, we are now on, depending on how you count, the third major version of it. And really, the progress has been determined more by resources than by any technology. You know, it's not that we sort of have a big R&D requirement. It's really more development. But we are a relatively small company. And because we're using such different technology, it's actually been pretty hard to raise VC money. You know, they look at it and, you know, ask you, "What's your training data? How big is your model?" You know, and that kind of thing.
CHAD: Oh, so the questions investors or people know to ask aren't relevant.
PETER: Correct. And, you know, they bring in the AI experts, and then they say, "Well, what kind of deep learning, machine learning, or generative, or what transformer model are using?" And we say, "Well, we don't." And typically, that's kind of, "Oh okay, well, then it can't possibly work, you know, we don't understand it."
So, we just recently launched. You know, with all the excitement around generative AI now, with so much money flowing into it, we actually launched a major development effort. Now we want to hire an additional hundred people to basically crank up the IQ. So, over the years, you know, we've been working on two aspects of it: one is to continually crank up the IQ of the system, so that it can understand more and more complex situations, reason better, and handle bigger amounts of data. So, that's the technical part that we've been working on.
But then the other side, of course, running a business, a lot of our effort over the last 15 years has gone into making it industrial strength, you know, security, scalability, robustness of the system. Our current technology, our first version, was actually a SaaS model that we deployed behind a customer's firewall.
CHAD: Yeah, I noticed that you're targeting more enterprise deployments.
PETER: Yeah, that's at the moment because, financially, it makes more sense for us to kind of get off the ground to work with, you know, larger companies where we supply the technology, and it's deployed usually in the cloud but in their own cloud behind their firewall. So, they're very happy with that. You know, they have complete control over their data and reliability, and so on. But we provide the technology and then just licensing it.
CHAD: Now, a lot of people are familiar with generative AI, you know, it runs on GPUs and that kind of thing. Does the hardware profile for where you're hosting it look the same as that, or is it different?
PETER: No, no, no, it requires much less horsepower. So, I mean, we can run an agent on a five-year-old laptop, you know, and it doesn't...instead of it costing $100 million to train the model, it's like pennies [laughter] to train the model. I mean, we train it during our regression testing, and that we train it several times a day.
When starting a new project, we understand that you want to make the right choices in technology, features, and investment but that you don’t have all year to do extended research.
In just a few weeks, thoughtbot’s Discovery Sprints deliver a user-centered product journey, a clickable prototype or Proof of Concept, and key market insights from focused user research. We’ll help you to identify the primary user flow, decide which framework should be used to bring it to life, and set a firm estimate on future development efforts.
Maximize impact and minimize risk with a validated roadmap for your new product. Get started at: tbot.io/sprint.
CHAD: So, you mentioned ramping up the IQ is a goal of yours. With a cognitive model, does that mean just teaching it more things? What does it entail?
PETER: Yes, there's a little bit of tension between commercial requirements and what you ultimately want for intelligence, because for a truly intelligent system, you want it to be very autonomous and adaptive and to have a wide range of knowledge. Now, for the current commercial applications we're doing, you actually don't want the system to learn things by itself or to make up stuff, you know; you want it to be predictable. So, to develop it further and ultimately get to full human-level or AGI capability requires the system to be more adaptive, to be able to learn more on its own.
So, the one big change we are making to the system right now is natural language understanding or English understanding. And our current commercial version was actually developed through our—we call them AI psychologists, our linguists, and cognitive psychologists—by basically teaching it the rules of English grammar. And we've always known that that's suboptimal. So, with the current version, we are now actually teaching it English from the ground up the way a child might learn a language, so the language itself. So, it can learn any language.
So, for commercial applications, that wasn't really a need. But to ultimately get to human level, it needs to be more adaptive, more autonomous, and have a wider range of knowledge than the commercial version. That's basically where our focus is. And, you know, we know what needs to be done, but, you know, it's quite a bit of work. That's why we need to hire about 100 people to deal with all of the different training things. It's largely training the system, you know, but there are also some architectural improvements we need to make on performance and the way the system reasons.
CHAD: Well, you used the term Artificial General Intelligence. I understand you're one of the people who coined that term [chuckles] or the person.
PETER: Yes. In 2002, I got together with two other people who felt that the time was ripe to get back to the original dream of AI, you know, from 60 years ago, to build thinking machines basically. So, we decided to write a book on the topic to put our ideas out there. And we were looking for a title for the book, and three of us—myself, Ben Goertzel, and Shane Legg, who's actually one of the founders of DeepMind; he was working for me at the time. And we were brainstorming it, and that's what we came up with was AGI, Artificial General Intelligence.
CHAD: So, for people who aren't familiar, it's what you were sort of alluding to. You're basically trying to replicate the human brain, the way humans learn, right? That's the basic idea is --
PETER: Yeah, human cognition really, yeah, the human mind, human cognition. That's exactly right. I mean, we want an AI that can think, learn, and reason the way humans do, you know, one that can hit the books and learn a new topic, that you can have any kind of conversation with. And we really believe we have the technology to do that.
We've built quite a number of different prototypes that already show this kind of capability where it can, you know, read Wikipedia, integrate that with existing knowledge, and then have a conversation about it. And if it's not sure about something, it'll ask for clarification and things like that. We really just need to scale it up. And, of course, it's a huge deal for us to eventually get to human-level AI.
CHAD: Yeah. How much sort of studying of the brain or cognition do you do in your work, where, you know, sort of going back and saying, "Okay, we want to tackle this thing"? Do you do research into cognition?
PETER: Yeah, that's a very interesting question. It really gets to the heart of why I think we haven't made more progress in developing AGI. In fact, another white paper I published recently is "Why Don't We Have AGI Yet?" And, you know, one of the big problems is that statistical AI has been so incredibly successful over the last decade or so that it sucked all of the oxygen out of the air.
But to your question, before I started on this project, I actually took off five years to study intelligence because, to me, that's really what cognitive AI approach is all about is you start off by saying, what is intelligence? What does it require? And I studied it from the perspective of philosophy, epistemology, theory of knowledge. You know, what's reality? How do we know anything?
PETER: How can we be sure? You know, really those most fundamental questions. Then, how do children learn? What do IQ tests measure? How does our intelligence differ from animal intelligence? What is that magic difference that, through evolution, suddenly gave us this high-level cognition? And the short answer is that being able to form abstract concepts, concept formation, is sort of key, together with metacognition, being able to think about your own thinking.
So, those are kind of the things I discovered during the five years of study. Obviously, I also looked at what had already been done in the field of AI, as in good old-fashioned AI, and neural networks, and so on. So, this is what I brought together, absolutely starting from the question: what is intelligence? Or, what are the aspects of intelligence that are really important and core?
Now, as far as studying the brain is concerned, I certainly looked at that, but I pretty quickly decided that that wasn't that relevant. It's, you know, you certainly get some ideas. I mean, neural networks, ours is kind of a neural network or knowledge graph, so there's some similarity with that. But the analogy one often gives, which I think is not bad, is, you know, we've had flying machines for 100 years. We are still nowhere near reverse engineering a bird.
PETER: So, you know, evolution and biology are just very different from designing things and using the materials that we need to use in computers. So, definitely, understanding intelligence, I think, is key to being able to build it.
CHAD: Well, in some ways, that is part of the reason why statistical AI has gotten so much attention with that sort of airplane analogy because it's like, maybe we need to not try to replicate human cognition [chuckles]. Maybe we need to just embrace what computers are good at and try to find a different way.
PETER: Right, right. But that argument really falls down when you're ignoring what intelligence is, you know, the kind of intelligence we're talking about. And we can see how ridiculous the sort of current...well, first of all, let me say Sam Altman and everybody say two things: one, we have no idea how these things work, which is not a good thing if you're [chuckles] trying to build something and improve it. And the second thing they say...Demis Hassabis and, you know, everybody says it: "This is not going to get us to human-level AI, to human-level intelligence."
They realize that this is the wrong approach. But they also haven't come up with what the right approach is because they are stuck within the statistical big data approach, you know, we need another 100 billion dollars to build even bigger computers with bigger models, you know, but that's really --
CHAD: Right. It might be creating a tool, which has some uses, but it's not actually...I mean, it's not really even actual artificial intelligence --
PETER: Correct. And, I mean, you can sort of see this very easily if...imagine you hired a personal assistant for yourself, a human. And, you know, they come to you, and they know how to use Excel and do QuickBooks or whatever, and a lot of things, so great. They start working with you.
But, you know, every now and again, they say something that's completely wrong with full confidence, so that's a problem. Then the second thing is you tell them, "Well, we've just introduced a new product. We shut down this branch here. And, you know, I've got a new partner in the business and a new board member." And the next day, they come in, and they remember nothing of that, you know, [chuckles] that's not very intelligent.
CHAD: Right. No, no, it's not. It's possible that there's a way for these two things to use each other, like generating intelligent-sounding, understanding what someone is saying and finding like things to it, and being able to generate meaningful, intelligent language might be useful in a cognitive model.
PETER: We obviously thought long and hard about this, especially when, you know, generative AI became so powerful. I mean, it does some amazing things. So, can we combine the technology? And the answer is quite simply no. As I mentioned earlier, we can use generative AI kind of as an API or as a tool or something. You know, so if our system needs to write a poem or something, then yes, you know, these systems can do a good job of it.
But the reason you can't really just combine them and kind of build a Frankensteinian kind of [laughs] thing is you really need to have context that you currently have fully integrated. So you can't have two brains, you know, the one brain, which is a read-only brain, and then our brain, our cognitive brain, which basically constantly adapts and uses the context of what it's heard using short-term memory, long-term memory, reasoning, and so on.
So, all of those mental mechanisms of deep understanding of context, short-term and long-term memory, reasoning, and language generation all have to be tightly integrated and work together. And that's basically the approach that we have. Now, like a human, if you ask it to, you know, generate an essay and you want it to come up with maybe some ideas, or change the style, for example, it would make sense for our system to use a generative AI system as a tool, because humans are good tool users. You know, I wouldn't expect our system to be the world chess champion or Go champion; it can use a chess-playing AI or a Go-playing AI to do that job.
CHAD: That's really cool. You mentioned the short-term, long-term memory. If I am using or working on a deployment for Aigo, is that something that I specify, like, oh, this thing where we've collected goes in short term versus long term, or does the system actually do that automatically?
PETER: That's the beauty of the system: it automatically has short- and long-term memory. So, really, the only thing that needs to be externally specified is things you don't want to keep in long-term memory, you know, for some reason, security reasons, or a company gives you a password or whatever. So, then, those need to be tagged.
So, we have, like, an ontology that describes all of the different kinds of knowledge that you have. And in the ontology, you can tag certain branches of the ontology or certain nodes in the ontology to say, this should not be remembered, or this should be encrypted or, you know, whatever. But by default, everything that comes into short-term memory is remembered. So, you know, a computer can have photographic memory.
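As a rough illustration of that tagging idea, here is a sketch in Python where nodes default to being remembered (the "photographic memory" default) and a branch of the ontology can be tagged as off-limits. The class and field names are assumptions for illustration, not Aigo's actual data model.

```python
# Illustrative sketch of ontology tagging: everything is remembered
# by default, but a node or whole branch can be tagged so it never
# moves into long-term memory. Names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class OntologyNode:
    name: str
    remember: bool = True       # default: photographic memory
    children: list = field(default_factory=list)

    def tag_no_remember(self):
        """Tag this node and its entire branch as not to be persisted."""
        self.remember = False
        for child in self.children:
            child.tag_no_remember()

# A tiny ontology: credentials (with a password under it) and a
# harmless preference node.
password = OntologyNode("password")
credentials = OntologyNode("credentials", children=[password])
preferences = OntologyNode("gift_preferences")

# Tag the sensitive branch; the password node inherits the tag.
credentials.tag_no_remember()

def to_long_term(nodes):
    """Only untagged nodes move from short-term to long-term memory."""
    return [n.name for n in nodes if n.remember]
```

The design point is that the exclusion is declarative and inherited down the branch, so one tag on "credentials" covers everything beneath it.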
CHAD: You know, that is part of why...someone critical of what they've heard might say, "Well, you're just replicating a human brain. How is this going to be better?" And I think that that's where you're just...what you said, like, when we do artificial general intelligence with computers, they all have photographic memory.
PETER: Right. Well, in my presentations, when I give talks on this, I have one slide that talks about how AI is superior to humans as far as getting cognitive work done, and there are actually quite a number of things. So, let me first give one example here. Imagine you train up one AI to be a PhD-level cancer researcher, you know; it goes through whatever training, and reading, and coaching, and so on.
So, you now have this PhD-level cancer researcher. You now make a million copies of that, and you have a million PhD-level cancer researchers chipping away at the problem. Now, I'm sure we would make a lot more progress, and you can replicate that idea, that same thinking, you know, in energy, pollution, poverty, whatever, I mean, any disease, that kind of approach. So, I mean, that already is one major difference: you can make copies of an AI, which you can't of humans.
But there are other things. First of all, they are significantly less expensive than humans; humans are very expensive. So, much lower cost. They work 24/7 without breaks, without getting tired. I don't know how many hours the best human can concentrate without needing a break, maybe a few hours a day, four or six. So, 24/7.
Then, they can communicate with each other much better than humans do, because they can share information by transferring blocks of data from one to the other without ego getting in the way. I mean, humans are not very good at sharing information and discoveries. And then they don't have certain distractions that we have, like romantic things and kids in school and, you know.
CHAD: Although if you actually do get a full [laughs] AGI, then it might start to have those things [laughs].
PETER: Well, yeah, that's a whole nother topic. But our AIs, we basically build them not to want to have children [laughs] so, you know. And then, of course, things we spoke about, photographic memory. It has instantaneous access to all the information in the world, all the databases, you know, much better than we have, like, if we had a direct connection to the internet and brain, you know, but at a much higher bandwidth than we could ever achieve with our wetware.
And then, lastly, they are much better at reasoning than humans are. I mean, our ability to reason is what I call an evolutionary afterthought. We are not actually that good at logical thinking, and AIs can be, you know.
CHAD: We like to think we are, though.
PETER: [chuckles] Well, you know, compared to animals, yes, definitely. We are significantly better. But realistically, humans are not that good at rational, logical thinking.
CHAD: You know, I read something that a lot of decisions are made at a different level than the logical part. And then, the logical part justifies the decision.
PETER: Yeah, absolutely. And, in fact, this is why smart people are actually worse at that because they're really good at rationalizations. You know, they can rationalize their weird beliefs and/or their weird behavior or something. That's true.
CHAD: You mentioned that your primary customers are enterprises. Who makes up your ideal customer? And if someone was listening who matched that profile and wanted to get in touch with you, what would they look like?
PETER: The simplest and most obvious fit is if they have call centers of 100 people or more—hundreds, or thousands, tens of thousands even. The economics work from about 100 people in the call center, where we might be able to save them 50% of that cost, you know, depending on the kind of business.
CHAD: And are your solutions typically employed before the actual people, and then they fall back to people in certain circumstances?
PETER: Correct. That's exactly right. And, you know, the advantage there is, whatever Aigo already gathers, we then summarize it and pop that to the human operator so that, you know, that the customer --
CHAD: That's great because that's super annoying.
PETER: It is.
PETER: It is super annoying and --
CHAD: When you finally get to a person, and it's like, I just spent five minutes providing all this information that you apparently don't have.
PETER: Right. Yeah, no, absolutely, that's kind of one of the key things that the AI has that information. It can summarize it and provide it to the live operator. So that would be, you know, the sort of the most obvious use case.
But we also have use cases on the go with a student assistant, for example, where it's sort of more on an individual basis. You know, imagine your kid just starting at university. It's just overwhelming. They can have a personal assistant, you know, that knows all about you in particular but then also knows about the university, knows its way around, where you get your books, your meals, and, you know, the different societies and curriculum and so on. Or a diabetes coach, you know, where it can help people with diabetes manage their meals and activities, where it can learn whether you love broccoli, or you're vegetarian, or whatever, and help guide you through that. Internal help desks are another application, of course.
CHAD: Yeah. I was going to say even the same thing as at a university when people join a big company, you know, there's an onboarding process.
PETER: Exactly. Yeah.
CHAD: And there could be things that you're not aware of or don't know where to find.
PETER: Internal HR and IT, absolutely, as you say, onboarding. Those are other applications where our technology is well-suited.
And one other category is what we call a co-pilot. So, think of it as Clippy on steroids, you know, where basically you have complex software like, you know, SAP, or Salesforce, or something like that. And you can basically just have Aigo as a front end to it, and you can just talk to it. And it will know where to navigate, what to get, and basically do things, complex things in the software.
And software vendors like that idea because people utilize more features of the software than they would otherwise, you know. It can accelerate your learning curve and make it much easier to use the product. So, you know, really, the technology that we have is industry and application-agnostic to a large extent. We're just currently not yet at human level.
CHAD: Right. I hope you get there eventually. It'll be certainly exciting when you do.
PETER: Yes. Well, we do expect to get there. We just, you know, as I said, we've just launched a project now to raise the additional money we need to hire the people that we need. And we actually believe we are only a few years away from full human-level intelligence or AGI.
CHAD: Wow, that's exciting. So, if the solution that you currently have and people want to go along for the journey with you, how can they get in touch with Aigo?
PETER: They could contact me directly: email@example.com. I'm also active on Twitter, LinkedIn.
CHAD: Cool. We'll include all of those links in the show notes, which people can find at giantrobots.fm.
You can find a complete transcript for this episode as well at giantrobots.fm.
Peter, thank you so much for joining me. I really appreciate it and all of the wisdom that you've shared with us today.
PETER: Well, thank you. They were good questions. Thank you.
CHAD: This podcast is brought to you by thoughtbot and produced and edited by Mandy Moore. Thanks for listening, and see you next time.
ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.