Ivan Kirigin is an engineer turned founder and now investor.
He’s been in Silicon Valley since 2008. In this conversation, we were able to cover a lot of ground, starting from why he’s bullish on self-driving, to the exciting future of healthcare, to why most people don’t get GPT-3.

These days, Ivan’s fund, Tango VC, invests in cutting-edge robotics and machine learning startups focused on solving big problems at scale.
I learned a ton from this and I’m sure you will too.


Listen Now:  Apple Podcasts  |   Spotify


Shownotes:

Ivan's VC Firm:
https://storycreatorapp.com/

Ivan's Twitter: @ikirigin

Thanks so much for listening. If you like this episode, please subscribe to the Addicted To Learning podcast and rate and review.

Transcript

But until you prove something is safe, you can't use it. That is fucking nuts. ...like a human-machine combination. And that's an incredibly exciting combo, because you have a human that is enhanced via machine, and you don't need some Neuralink implant to be able to do that.

Ivan Kirigin is an engineer turned founder and now investor. He's been in Silicon Valley since 2008.

And in this conversation, we were able to cover a lot of ground, starting from why he's bullish on self-driving, to the exciting future of healthcare, to why most people don't really get GPT-3. These days, Ivan's fund, Tango VC, invests in cutting-edge robotics and machine learning startups focused on solving big problems

at scale. Personally, I learned a ton from this and I'm sure you will too. That being said, here's my conversation with Ivan Kirigin.

Yeah, so we were just talking about how you got into tech. It was in '08 that you got started in the Bay Area, is that right?

Yeah. So let me go back a little further.

I started like most kids that are kind of nerdy: being very into sci-fi. You see Commander Data in Star Trek, or Skynet in Terminator, or Blade Runner. There are a million ways that AI is shown in film, and I'm a big fan of movies and sci-fi, and I thought, I want to go work on that.

And I thought I'd be pretty good at it because I was good at math and physics. I didn't know much of anything, actually, in high school. I remember having a friend in academic decathlon that knew how to convert hexadecimal to binary or to decimal. And that was hilarious because it seemed like magic to me.

Like, what the fuck is this guy talking about? I just literally knew nothing about computers. That was my senior year in high school, but then I majored in computer science. And so I started to code in undergrad, and very quickly, when you get into computer science, you realize

nothing works. The stuff you expect to see from sci-fi is incredibly hard. And so then I went into robotics with a focus on computer vision in grad school at Carnegie Mellon. This was before deep learning, and just to give you a sense of it: detecting a face in a photo was possible, but relatively hard. Understanding what was in a scene,

so for example, who is that person, or even going beyond "there's a face": if you saw a pedestrian, recognizing there's a pedestrian over there, that's a cyclist, that's a car, that's a tree, that stuff was really hard. So around 2007 or so, I was working on the DARPA Urban Challenge, which is this competition to get autonomous vehicles working in a city setting.
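To give a flavor of what pre-deep-learning face detection looked like, here is a minimal sketch of the kind of Haar-like rectangle feature the classic Viola-Jones detector was built on. This is an illustration of the era's technique, not anything Ivan specifically worked on: you precompute an integral image so any rectangle sum costs four lookups, then score a bright-over-dark contrast feature.

```python
import numpy as np

def integral_image(img):
    """Cumulative sums over rows and columns: any rectangle sum becomes 4 lookups."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] using the integral image (exclusive ends)."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

def two_rect_feature(ii, r0, c0, r1, c1):
    """Haar-like feature: top-half brightness minus bottom-half brightness."""
    mid = (r0 + r1) // 2
    return rect_sum(ii, r0, c0, mid, c1) - rect_sum(ii, mid, c0, r1, c1)

# Toy image: a bright band over a dark band, loosely like forehead over eyes.
img = np.vstack([np.full((4, 8), 200.0), np.full((4, 8), 50.0)])
ii = integral_image(img)
print(two_rect_feature(ii, 0, 0, 8, 8))  # 4800.0: a strong positive response
```

A real detector cascades thousands of these weak features; the point is how handcrafted and brittle this was compared to learned scene understanding.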

It was a fun project, but then I basically decided there weren't going to be new products in robotics for a while. And so I did something totally different: a web payments startup in YC Winter '08 called Tipjoy. And I'll just whirlwind through the timeline here.

So the startup didn't work out. I went to work at Facebook and got obsessed with growth, 'cause I had a chip on my shoulder about it. Then I worked at Dropbox early, I was like number 20 or 22, and they 12x'd in two years. For the record, they have not 12x'd since, but that's a whole debatable story as to why; there's a lot of tactics I would have tried if unfettered. And then I did a second startup called YesGraph that focused on social graph analysis, first within recruiting and then within invite flows to make products' growth better.

So for example, when you see an invite flow in a product like Clubhouse, of the people that you're connected to, who should you connect with? That's the product Lyft acquired. And so I worked at Lyft for a little bit on the growth team, on acquisition, then went back into robotics at Lyft Level 5, where I worked on simulation and scenarios.

And then in late 2019 I started angel investing full-time, leaving Lyft. And now I am a VC with a fund called Tango, obviously, that focuses on ML and robotics. And, you know, there are so many things to talk about there, as far as having a new fund, investing in the tech; there's a lot we could dive into. And I know that was a whirlwind, so I covered 12 years in one minute.

I'm like, wow. The fact that you got this into like two minutes; I probably would have had to rehearse this a bunch of times.

For context, I don't know if I'm allowed to say this, but let me just say: when you raise a fund, you pitch, and part of the pitch is that you talk about your history.

And so this is something I've said dozens and dozens of times over the last few months.

Yeah. And are you allowed to talk about kind of like the numbers in your fund?

Yeah, sure. So I think specifically what you're not allowed to say is (and I'm quoting something hypothetical now):

you're not allowed to say, "Hey, I have a new fund, come invest in it." There are rules against that. And actually, some of the things you're seeing with rolling funds, they are allowed to market themselves. This is why you never see Sequoia or Andreessen Horowitz publicly say, "Hey, we're raising a fund,

why don't you hedge funds or private equity or endowments or whoever else come and invest in it." You're not allowed to. And the reason is the SEC doesn't believe in people being intelligent. It's basically a nanny state where you're literally not allowed to invest. And especially because companies are staying private longer and most of the growth is in tech,

I cannot emphasize enough how inequitable this is, that normal people can't invest in the highest-growing sector. And that's a whole rant I could have about the SEC. I think it'll be pretty clear how libertarian I am from some of my comments; if you want to talk about healthcare, my first comment is on the FDA killing more people than it saves.

So there is this idea that if you have a fund, you can't publicly talk about it unless, for every investor you bring on board, you literally check like a bank balance to see if they're rich enough. You literally have to be rich enough to be able to invest. And if you do that stronger check, then you are allowed to market publicly, because that means, instead of filtering who you talk to and then letting everyone in,

where you only try to talk to accredited investors, you have a filter after you get people in the door. So that's the difference between rolling funds and not. And because I don't have a rolling fund, I don't think I'm allowed to talk about fundraising. But I can say things like: our target is a small fund, we would write a few-hundred-K checks, and we're raising and deploying at the same time, which means that we are writing active checks now while

doing the rest as well. I think it's like a landmine here, because I'm not exactly sure what I'm allowed to say or not. But the fund is actively deploying, I'll say that. We have some funds raised and are writing checks.

Yes. And can you talk about some of your latest investments?

Sure.

There's a lot here, and sort of the nature of this is that it's a little frenetic, because the basic thesis of the fund is that ML and robotics and automation are going to be a big deal. And I could talk about ten dimensions on that. So for example, recently you've seen GPT-3, where you have text processing that appears to be really good.

I don't think people have thought through what this means. So for example, think about a marketing job, and I know a lot about marketing from growth. You have a copywriter, you have a person that might write ad copy, you have blog posts for content marketing. You even have support; could we consider that like an onboarding step in marketing?

And all of those are basically a matter of a text Q&A interface, or rehashing the same ideas. So what you say on your company homepage should match up with what you say in the blog, and it should match up with what you say in customer support. And so I think one of the first white-collar jobs that is going to change with

this kind of text processing is marketing. There's a term called "centaur" that I like a lot; it's from chess. After Garry Kasparov was defeated, he had this idea of humans and machines working together, and for a little while, they were actually better than the machines alone. So you have a human-machine hybrid, and within chess, it was clear that it was just a matter of time before the machines,

not aided by the humans, the machines alone, were going to be better. But I think especially when it comes to something like marketing and psychology, or convincing somebody of something, writing generally, you're going to have a human touch on that for a long time. Anyone that's interfaced with GPT-3 would say that it clearly breaks down at some point.

So the idea there is that for a little while, we're going to have marketers and other kinds of people that write, interfacing with software, and that's pure software, and that's awesome. But then you have the other end of the spectrum in robotics. And I think most people within ML don't quite understand robotics. You can break it up into categories, and you get distracted by autonomous vehicles and how long they've taken, going through this hype cycle of "they're around the corner."

I was a seed investor in Cruise before they did YC; it was my first angel investment and the best one that I did. Having, you know, a 62x return in just three years on an angel investment is pretty much the best you could possibly hope for. And they still haven't launched. Just the highlights:

that was 2014 or so, when they did YC. And so now it's six years later, seven years later, and they're still working. And the problem, I think, to understand this is around how much autonomy there is. So within self-driving, there's this concept of ODD, the operational design domain, which is to say: where is this thing supposed to work?

So you can imagine something that is not able to work in rain or snow or in roundabouts or different scenarios like that. So when you think about full autonomy, you actually have to cover a huge range here. And so what I'm pretty excited about in robotics is: what if you didn't have to cover that huge range? What if you only have to do a little bit?
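One way to picture an operational design domain is as an explicit allow-list the stack checks before engaging. Here's a toy sketch; the field names and thresholds are made up for illustration, not from any real system:

```python
from dataclasses import dataclass

@dataclass
class Conditions:
    weather: str         # e.g. "clear", "rain", "snow"
    scenario: str        # e.g. "highway", "roundabout"
    speed_limit_mph: int

# Hypothetical ODD: the narrow slice of the world this system is designed for.
ODD = {
    "weather": {"clear"},
    "scenario": {"highway", "suburban"},
    "max_speed_mph": 45,
}

def within_odd(c: Conditions) -> bool:
    """Engage autonomy only when every condition is inside the design domain."""
    return (c.weather in ODD["weather"]
            and c.scenario in ODD["scenario"]
            and c.speed_limit_mph <= ODD["max_speed_mph"])

print(within_odd(Conditions("clear", "suburban", 35)))   # True
print(within_odd(Conditions("rain", "roundabout", 35)))  # False
```

The point of the sketch: "full autonomy" means making that allow-list cover almost everything, while limited autonomy shrinks it to something shippable.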

So I could talk about a few different companies. One of them is Teleo, which is some friends from Lyft that were working on self-driving at Level 5 and are now working on automating heavy construction equipment. And so they have things like a loader that moves a bunch of dirt, and these are really big vehicles, like ten-foot-tall wheels, from one part of the construction site to another.

You can, for example, level an area, and that's a really laborious task. And just from Lyft, you might consider the utilization of that driver and how they're used. If somebody can teleoperate that, that alone increases productivity, because they can go from site to site, vehicle to vehicle.

But then you can imagine limited autonomy. The hard part is scooping and dumping, and the driving in between where you scooped and dumped is actually relatively easy. And it's a construction site, so it's a relatively controlled environment compared to the open road.

And so you can have limited autonomy, and the important bit with the teleoperation is that that experience lets you then train the machine. So you actually have customers paying you to deploy a product, in order to have the machine be able to do what it needs to do, as opposed to spending a billion dollars driving a hundred million miles with these operators trying to make sure the system works fully autonomously.

That's what Waymo has been doing for like a decade, a very long time, and that's why it's so expensive. Other examples of limited autonomy include more of this kind of teleoperation. So one of the companies I'm invested in is a climate robotics company, and I don't think they want to dive into publicly what they do quite yet, as far as what the functionality is, but suffice it to say that an operator controlling multiple robots on a farm is actually what'll make it work, not full autonomy.

So just one person being able to drive or, you know, corral a dozen robots versus one: that person's hourly wage is amortized across all those robots, which means the cost of the service operations goes down dramatically. And actually, another example of that is Locomation, which is a very fast-growing company and highlights the ability of limited autonomy to make some money quickly.
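The operator-amortization math he describes is simple but worth making concrete. Assuming a hypothetical $30/hour operator wage (a made-up figure for illustration):

```python
def labor_cost_per_robot_hour(wage_per_hour, robots_supervised):
    """One teleoperator's wage, spread across every robot they corral."""
    return wage_per_hour / robots_supervised

wage = 30.0  # assumed hourly wage, for illustration only
print(labor_cost_per_robot_hour(wage, 1))   # 30.0 -> one driver per machine
print(labor_cost_per_robot_hour(wage, 12))  # 2.5  -> one operator, a dozen robots
```

Going from one machine per operator to a dozen cuts the labor line item by 12x, which is why partial autonomy plus teleoperation can pencil out long before full autonomy does.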

So what they have is a human driving a truck, with another truck following it that is autonomous. And there are laws that say you can't drive more than 10 or 11 hours out of 24, just for safety for truckers. And the idea here is that it's actually much more limited autonomy to be able to follow a human, because there's always a human driving. And then the back truck has a driver in it that rests.

So they're autonomous, and then they toggle. So within, you can imagine, a 20-hour shift, each driver has only driven 10 hours, but both trucks got to where they need to go. And that increase in productivity is unheard of in trucking. The industry is happy if they get 2% savings on fuel; this kind of industry is so tight on the margins. But if you were to increase productivity 30, 40%, it's a game changer.
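Using the numbers from his example, the convoy arithmetic can be sketched like this. Note the simple version gives the ideal-case ceiling, not the 30-40% real-world figure he cites:

```python
# His example: a 20-hour convoy shift where the two drivers toggle.
solo_hours = 10    # roughly one trucker's legal driving window
convoy_hours = 20  # both trucks keep moving while one driver rests

ideal_gain = (convoy_hours - solo_hours) / solo_hours
print(f"{ideal_gain:.0%}")  # 100% more road hours per truck in the ideal case;
                            # real-world overhead (loading, forming the convoy)
                            # pulls it toward the 30-40% he mentions
```

Even the discounted real-world number dwarfs the 2% fuel savings the industry celebrates.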

And what I like so much about this company is that they can launch the product, and they even announced a very large purchase order that highlights that they're going to deploy this. They're actually going to ship something. And instead of raising a billion dollars from SoftBank or whoever else is going to fund this,

they can get customers to pay for it, which as a seed investor is awesome, because you don't get diluted. And generally, having something real in the market is what people want. They want to ship. A lot of engineers at self-driving car companies are very frustrated: you know, we want a product in the market,

you want to impact people's lives. But if the first thing you attack is a robotaxi, it's just very hard to get it. And I'm very confident about the ability to get it, but it's not as fast as people want.

Let me ask, because I studied computer science myself, I'm a technical person, but for maybe some non-technical members of the audience, what do you think about

the timeline of self-driving cars? I know it's kind of the question people always ask. I saw on your blog posts you have strong opinions. What's your take on that?

So you can't look at the timeline without thinking about the culture and regulatory framework around it. And there's this concept of the precautionary principle, mainly from Europe, which is the idea that until you prove something is safe, you can't use it.

That is fucking nuts, and it's so wrong on so many levels. One of the primary ways it's wrong is that it doesn't match up with progress in the past at all. We did not develop the innovation that we love today that way, and this is not small innovation; this is absolutely enormous. So for example, in the seventies, there was talk of a population bomb: that you would have hundreds of millions of people starving in countries like India and China.

And that didn't happen. Why? If people don't know the answer to that question, they have no right to talk about progress, because if they don't even know what saved hundreds of millions of lives within their lifetime, then they really need to hit the books and study this. And so the answer is that when it comes to innovation, you need to accept

that there's going to be a lot of experimentation. And I'm not saying you should be anti-regulation, but the idea that you need to pre-approve everything as safe before you launch it is ridiculous. And the example I like to give is teenage drivers. This is not hypothetical: teenage drivers kill people, including themselves.

This is very, very well known. And think about the verification we do for a marginal teenage driver. Are we really sure they're not going to be irresponsible? Do we, for example, even ask them: hey, have you ever had a beer? Are you the kind of teen that drinks or not?

This is the kind of thing that would be huge. You can imagine things like verification: every time you get in a car, you use a breathalyzer, because the data is very clear that drunk people get in car accidents. We don't even have other measures of knowing, like, if I'm looking at my phone, which you see constantly. My favorite activity these days when I see a bad driver is to peek in, you know, when I'm driving by them, and see whether they're looking at their phone. And like 90% of the time,

yes. So we essentially have effectively drunk people all the time, because there are billions of these phones everywhere and every driver owns one. And I say all this to highlight that the idea that you should only launch after you're sure everything is safe doesn't even match up with what we're doing today with human drivers. It doesn't match up.

So I don't think it's a fair comparison. And as far as when we will launch versus when we should launch, those are two different questions. I think we have more than enough data to say that we should have already launched self-driving. And the federalist system in the U.S. is actually helping out here, because different states have different policies that allow more liberal deployment.

And so a bet I made, I want to say three or four years ago, is an over/under bet. The structure of an over/under bet, if people don't know, and it's one of my favorites: you might think something's going to happen, like, oh, self-driving will come. That's not the question. The question is when. And so you have to find the point of disagreement, where one person would say it's going to be after this date,

and the other person would say it's before this date. And so it was 2017 or so, and I'm like, I'm under January 1st, 2020. And in December 2019, Google launched the ability to go on a rideshare network, with a limited scope within a city. I think it was Phoenix, or another kind of desert town.

And that's relevant for how hard this is. And there's no driver in the car. I think that threshold of "there's no driver" matters quite a bit, because then the operations of self-driving dramatically change. So if you think about a Lyft ride, let's throw some rough numbers out there. It might be

$15 for a ride, and of that, 10 to 12 dollars goes to the driver and $2 is for insurance. And the demand curve slopes downward: if you were to make that Lyft ride $5 instead of $15, then it would take over all driving, and owning a car wouldn't make that much sense. So the reason for self-driving is very, very clear.
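Those rough numbers can be laid out explicitly. All figures here are the illustrative ones from the conversation, not actual Lyft economics:

```python
fare = 15.0
driver_share = 11.0  # midpoint of the "10 to 12 dollars" he throws out
insurance = 2.0
platform = fare - driver_share - insurance  # what's left for the network

# Remove the driver, and the ride can be repriced near the remaining costs.
driverless_fare = fare - driver_share
print(platform)         # 2.0
print(driverless_fare)  # 4.0, in the ballpark of the $5 ride he describes
```

The driver is by far the largest cost line, which is why removing them moves the fare from $15 toward $5 rather than shaving pennies.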

And I might add that every single passenger on every single bus ride is getting at least a $3 to $5 subsidy per ride. And so if you add that together, actually what should happen is that all public transit, at least buses, should be replaced by shared electric autonomous vehicles.

So as far as the timeline, though, I think we're going to start seeing it ramp up, and there's a question of which cities are going to be faster. I do want to highlight a difference in approach between two companies. So Waymo is being very cautious in relatively easy areas.

When it comes to these deserty towns, there are very clear lane lines, everything is pristine, it never rains, and there are huge roads with big sidewalks and big parking lots, which means construction sites don't impinge upon the road. There are very few cyclists because it's a driving culture. And Cruise is taking a very different approach, where they want to tackle the hardest cities first.

And so they are in San Francisco, with the construction sites and the cyclists and the tight roads and crazy windy streets. And their thinking is: if we want to drive a marginal mile, how hard is that? Because we want it to be as hard as possible, so we learn as fast as possible.

So I think Waymo will be able to scale in easy towns relatively quickly, and Cruise will be able to be in more towns faster because they're doing the hard stuff first. Timeline-wise, they could honestly do it now, and it's really a matter of the regulatory concern around this. And you might have some governor on it, to say, hey, don't go faster than 25 miles an hour, or stay on these roads.

And so I think you'll start to see that ramp up. COVID is a huge issue here, where you have rideshare tanking and people just aren't going places. So the engine that was pulling all of this is also impinged, but I think by the end of 2021, you'll see that unlock as well.

So where did you put your money?

On self-driving cars first becoming ubiquitous in sort of third-tier cities, smaller cities, and then kind of scaling into bigger metros?

No. I think San Francisco and LA and big cities are going to be first, where they want to tackle it.

There's the economics of rideshare: the denser the city, the better it is. So just to do some math on that: if you have three rides in an hour versus two, then the hourly rate of the driver is going to be more a function of how many rides they do than how much they get paid per ride, which means that if you have high density, drivers get paid a lot better.
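The density math above can be made concrete. Assume an illustrative $5 net to the driver per ride (a made-up number just to show the shape of it):

```python
NET_PER_RIDE = 5.0  # assumed driver take per ride, for illustration only

def driver_hourly(rides_per_hour, net=NET_PER_RIDE):
    """Driver earnings per hour: rides completed times net pay per ride."""
    return rides_per_hour * net

print(driver_hourly(2))  # 10.0 in a sparse city
print(driver_hourly(3))  # 15.0 in a dense one: same per-ride pay, 50% more hourly
```

Same per-ride economics, but density alone raises the driver's effective hourly wage, which in turn lets per-ride prices fall for passengers.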

And that means that passengers pay less. So it's actually cheaper to do rideshare in denser areas. So definitely first- and second-tier cities will be first. It might not be Manhattan at first, but having a big city in a desert area I think is going to be easiest, and actually LA is not that far from that.

So San Francisco is harder than LA, but all of these cities will come around the same time. And as far as how fast they scale, keep in mind the timelines we're talking about here. You have test vehicles that are different from production vehicles. So GM announced the Cruise Origin, and if you know anything about manufacturing and cars, you know it takes years and years and years.

They announced that, I want to say, 18 months ago or so, and that is very exciting, but then in another 18 months it might launch. You see this everywhere: you see a Cybertruck event, and then two years later you can buy one. There are just years in between. And the important thing about the Origin is that it's designed for autonomy.

So it has no steering wheel, it can go in two directions, and it has just four seats with sliding doors on either side, so passengers can get in and out. It's designed to be electric, autonomous, shared. And so I expect that to ramp. I hope it launches in 2022, but I don't think it's going to be 2021.

And do you think that's going to be basically a fully self-driving car in, say, the entire LA metro area? Or do you think it's more like a very limited scope, say only up and down Sunset or something?

Yeah. So you could launch that limited scope, and that would actually be useful.

And that's the bus lines that people have, right? So it's not that weird to imagine bus lines being replaced, not at all.

It's just, like, people don't take buses in the U.S.

Well, there's a density issue here. So I went to UCLA before transferring to NYU, before Carnegie Mellon, and one summer I lived on the east side and went to the west side.

So I've taken the bus on Sunset back and forth, and there's a very strong reason why everyone drives: the bus is like a miserable experience. And fun side note: every single day, I would overhear a conversation on the bus of, "I'm from

Nebraska," or wherever else in the country, sometimes outside the United States, "I'm currently a waiter or waitress, whatever, and I'm going to make it in Hollywood." And so it's just interesting: once you get to LA, if you ride the bus on Sunset, every single day you'll hear somebody trying to make it in Hollywood. Which is incredible.

That's the difference between status and positive-sum. Status is zero-sum: for somebody to be on top, somebody has to be lower on the ladder. But in tech, everyone can build, and that's just very exciting. There's a cultural question there, but in terms of self-driving, as far as one road versus a lot:

yeah, the tech required to launch in a limited way on one road is sufficient to do other roads. There is some infrastructure under the hood, though. For example, Tesla doesn't do this, but other companies will have HD maps, as they're called. Those are 3D maps of the entire environment.

And that's really important, because it means you're kind of running on rails, where you know exactly where you are when you localize to the map, and you know the route to take and where you want to go. That is very different from: I plop a car down at an intersection it's never seen, it doesn't have a map, and it needs to know where to go.
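A cartoon version of "running on rails" against an HD map: localize by snapping a noisy position estimate to the nearest mapped lane point. Real HD-map localization fuses LIDAR, cameras, and odometry against a dense 3D prior; this sketch only shows the nearest-point idea, with a made-up straight lane:

```python
import numpy as np

# Toy "HD map": densely sampled centerline points of one lane (x, y in meters).
lane = np.array([[x, 0.5 * x] for x in np.linspace(0.0, 100.0, 201)])

def localize(noisy_pos, lane_points):
    """Snap a noisy GPS-like fix to the closest mapped lane point."""
    dists = np.linalg.norm(lane_points - noisy_pos, axis=1)
    i = int(np.argmin(dists))
    return i, lane_points[i]

idx, snapped = localize(np.array([40.3, 19.8]), lane)
print(idx, snapped)  # the map index and position the car "knows" it is at
```

Once localized to the map, the route and every upcoming intersection are known in advance, which is exactly the prior a map-free system has to reconstruct from perception alone.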

So Tesla is taking a very different approach for a bunch of reasons, and one of them is just what data they have under the hood. Tesla has a bunch of cameras and a little bit of radar, but no LIDAR. Let me explain some of these. Radar is a really old technology where you bounce a radio wave

out and then see what reflects back. Some metal objects are very easy to see, and generally thick objects are easy to find. So if you want to find a tire under the car, or even a cyclist or a human, radar can find it, but it's low resolution. Cameras are 2D, and they're not active sensors where you can see what's there; they need to interpret what's in the scene.

And actually, the depth perception humans have, depth perception with stereo, is a function of how wide the baseline of the cameras is. It kind of makes sense: you'd see more depth at a greater distance if the cameras are really far apart, and humans are optimized for arm-length depth. And that makes sense, right?

You want to work with your hands; you don't need depth for what's far out there. If you're going beyond like 50 yards, you don't actually see any depth, and it's more a comprehension of what's in the scene, which actually gives you hope for monocular cameras being able to find depth. And so you can use all these systems to train, with deep learning, a single camera to say: this is what the 3D scene actually looks like.
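The baseline intuition can be written down with the standard pinhole stereo relation, depth = focal length × baseline / disparity. The numbers below are illustrative, not any real camera's spec:

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Classic pinhole stereo: depth is focal length times baseline over disparity."""
    return focal_px * baseline_m / disparity_px

def disparity_at(focal_px, baseline_m, depth_m):
    """Inverted: how many pixels of disparity a point at this depth produces."""
    return focal_px * baseline_m / depth_m

f = 1000.0    # focal length in pixels (assumed)
eyes = 0.065  # roughly a human interpupillary baseline, in meters

print(disparity_at(f, eyes, 0.7))   # ~93 px at arm's length: easy to measure
print(disparity_at(f, eyes, 50.0))  # ~1.3 px at 50 m: the depth signal is nearly gone
```

With only a pixel of disparity left at 50 meters, far-field "depth" has to come from recognizing what's in the scene, which is the argument for learned monocular depth.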

And the reason you would not build a map from that is that it's not necessarily very reliable. Your accuracy is much, much, much lower resolution than what you get with a LIDAR. LIDAR is like radar, but instead of radio waves, it's infrared or other parts of the EM spectrum, which makes it more like a laser being shot out.

And it's a time-of-flight device: you literally shoot out a laser and see how long it takes to bounce back. And that means if you have a bunch of lasers and a spinning mirror, you can basically sweep a whole environment with these lasers, and you're able to scan in 3D very quickly. So every, you know, 20 times a second, you have a thousand data points in 3D around you.
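Time-of-flight reduces to one line of physics: distance = speed of light × round-trip time / 2 (halved because the pulse travels out and back). The nanosecond figure below is just a worked example:

```python
C = 299_792_458.0  # speed of light, m/s

def tof_distance_m(round_trip_s):
    """The laser travels out and back, so halve the round-trip path length."""
    return C * round_trip_s / 2.0

# A return after ~667 nanoseconds puts the target about 100 meters away.
print(tof_distance_m(667e-9))
```

The timescales involved (hundreds of nanoseconds per 100 m) are why LIDAR can sweep thousands of precise 3D points many times per second.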

And one of my investments is in this space: a company called Red Leader, and they're awesome. They're like HD or ultra-4K resolution compared to a Game Boy when it comes to how many points you get in LIDAR. But now I'm a bit far afield. So the point is, Tesla does not need the maps, which means they can go anywhere.

The moment Tesla can do a four-way stop sign in a residential neighborhood with trees, they'll do that in any city, and they don't have a map, per se, of where you want to go. So that affects the timeline here. I do think Elon is a bit of a liar when it comes to how he says self-driving works.

The idea of calling it Full Self-Driving is absurd on the face of it; it doesn't actually do that. There are tiers: level one, level two, level three of self-driving. Level two is when the system can drive on its own, but it doesn't know when it's wrong. Level three is when the system knows when it's wrong,

so it can alert the human to take control. A Tesla is a level two autonomy system, which means it doesn't even know when it's wrong, which means if you have a problem on a freeway, you're going to die. You're going to hit that thing and it's not going to work. So imagine there is, say, a pallet on the road in front of the car you're driving behind. Tesla

can't see that. They hope to update their system, but they can't yet. You know, I can go off on this for a long time, so tell me what more you want to hear about.

No, I was going to say, it reminds me of a bunch of studies and experiments people were doing with CRISPR and gene editing.

I think it was a couple of years ago, and then someone died, and all of a sudden there was this huge shit storm. They had to shut everything down, and it basically set the entire field back like a decade.

So when it comes to healthcare, it's interesting.

Just to highlight: I'm not a doctor, and I don't know a lot of things when it comes to healthcare. But I do study innovation, and I also studied government and economics, so I definitely understand some parts of this. And I think the clearest example is with vaccines. We had usable vaccines in February and March of 2020, and I think people need to let that sink in, what that means:

that 500,000 people are dead because of the old protocol we were following, with the FDA approving a vaccine. That is nuts. We have millions of people that work on front lines and in the military that put their lives on the line routinely to help this country. And if we said, "Hey folks, we want to test this vaccine,

we're going to do a challenge trial." The way a challenge trial works: you get the vaccine, and we give you the disease to see if the vaccine worked. And that is incredibly aggressive, obviously, and it's not the normal protocol to follow. And the calcified infrastructure, sorry, the regulatory infrastructure around what we do means that we couldn't even think creatively enough to follow that.

Despite doing unprecedented things like lockdowns, we wouldn't test things differently. So instead of doing a challenge trial with a few people, on the order of dozens or hundreds, we allowed hundreds of thousands of people to die, and everyone in the country to be impinged in their freedom and their livelihood with lockdowns that have been largely ineffective. Of course, I'm a data scientist.

So the counterfactual here is hard to estimate: how different would things have been if they hadn't locked down? But there are examples of schools that didn't lock down with no dramatic difference in case rate. So those are the very specific things you see there. When it comes to CRISPR and other technologies like this, I think people should be biased towards building things.

Just like we're delighted that there's a vaccine available within a year, we should want to solve these problems. And what that means is taking some risks. You have to be ethical about it, so if somebody is going to be in a challenge trial for a vaccine, you say, hey, we're not sure how to treat this disease; you might die.

And the thing is, I bet you'd have a hundred thousand patriots in the US alone saying, okay, sign me up, because I don't want my grandma to die. That's why they joined the military. It's what they do. And there's this concept, Marc Andreessen said it well in his essay "It's Time to Build," where it's a mystery to me why we've tolerated such a

terrible pace of innovation along a lot of different dimensions. When you think about what you really care about in terms of progress, I care so much less about NBA Top Shot and NFTs. If you guys don't know, it's a crypto thing, basically a baseball card in digital form, and I just don't give a fuck about

baseball or NBA cards relative to: when will my mom die? What the fuck are we focusing on, if not the things that will actually save lives? And then you go one level deeper and it's like, well, exactly how does this progress go? Who grants what funding? Who approves something to get on the market?

And again, I'm a novice when it comes to the actual medicine, the science of this, and I'm also a novice when it comes to the regulatory side, but I'm definitely not satisfied. It needs to be far, far faster. And at the same time you have those exceptions where they can fast-track new iterations of old devices, like artificial hips, where it's like, hang on, you can't fast-track vaccine development,

but an artificial hip you can? You know what I mean? Yeah. There are so many examples here. For example, when you test a cancer drug, even with all the trials they do, the standard of performance is that you compare it to having no treatment, not to the best cancer drug around. So there are multiple problems with how healthcare is done.

Another example is health insurance companies: the average customer lifetime with an insurance company is like three to four years, which means they are literally not incentivized to make you healthy. They just don't care. And I think that should change. There's so much here

that's wrong. For example, in World War II, we're talking 80 years ago, there were price controls on wages, because the government was much more involved in that kind of wartime production economy. All these employers were like, hey, we need to be able to compete on how we compensate people.

It was like, oh, we'll make it tax deductible for you to cover other benefits like insurance. And that is the origin of why your employment is tied to your insurance, and we have not broken out of that. It's nuts that it takes 70 years. People think, wait, why do I lose my insurance when I lose my job?

And it's just this stupid incentive structure we have, related to price controls in a war that very few people alive today fought in. So it's one of these things where the question is, why are we moving so slowly? Why does it take decades and decades to change these stupid laws? It's very frustrating. And honestly, that's policy. That's not related to

why I'm healthy or unhealthy. It has nothing to do with the medicine; it's all about the incentives and the structure around it. But to flip it around a little bit: what are you excited about in healthcare in 2021?

And what do you think the next, say, five to ten years of health technology will have in store for us? There's a lot of things that I think could get better if we just let them, and a good example is telemedicine. I think we're just at the very beginning of this. It's funny that COVID required it, and then you realize, oh, if I want a doctor to look at some rash or whatever, or just talk, that doesn't actually require booking an appointment, waiting in an office, and

then being around a bunch of other sick people. There's a strong reason why you should do telemedicine, and I think we're just at the beginning of unlocking that. And for context, on the product side, you have a bunch of industries that are getting cheaper and a bunch that are getting more expensive.

So if you look at inflation, education, housing, and healthcare are all incredibly expensive; they grow faster than the average. And anything touching software technology is getting cheaper and cheaper, which is ironic, because you would think that with something like telemedicine, it would bleed into the other industries and make them more productive.

But that's not the case. Actually, the share of GDP for healthcare has gone from 11 to 19% in the past 20 years or so. Ask yourself if you think we're getting twice our money's worth when it comes to healthcare. We're not. But there are some other technologies that are exciting.

I think CRISPR is exciting, and I think most people don't quite grasp what it means. If I want to build a bridge, I can build a bridge and run a test and see how it goes. I can do stress tests on reinforced concrete; I can do all sorts of tests. And one of the fundamental things with the genome, the idea was, oh, let's map this.

And you think, okay, we're going to understand what genes do. But no, just reading it out is not really a map, because you want to know what does what. And how would you run that test? Well, what if we ran a basic experiment and knocked one gene out and saw what happens? It's like, oh, okay.

But what if it's in a living organism, or it's going to be an embryo? How do you do it? CRISPR actually allows you to do that for the first time, and I think that is night and day as far as the engineering you can do. So we have software engineering, and we don't have gene engineering, because of this basic ability to say, well, what if we changed something?

And so I think there's going to be a huge shift because of that ability on the genetic side, to see what maps to what. And when you dig under the hood of what's going on with health, there are just so many things we have no idea about. Nutrition is probably the worst example: we have no idea how what you're eating impacts you.

We have no idea what genes map to some of those sensitivities, and even things psychological. Actually, one abstraction I was thinking of recently is networks versus labels. As humans, we like to have labels for things. We have a technology called CRISPR.

We have a part of your brain called the hippocampus that's related to memory, and we have behaviors: this person is forgetful, or this person is depressed, and all these things. But think about what everything fundamentally is. On the gene level, the neuron level, and the behavioral level, these are actually all networks and connections of things, which means there's a whole network of genes that affect who you become.

You have an environment with a very diverse set of stimuli that then affects how your brain forms, which is also a network. And there isn't a box, say the hippocampus, that simply holds memory. That's really, really overly simplified; the brain is much more complicated with memory. And when it comes to behavior as well, everything is a spectrum.

So whether I have kids that are dyslexic, or I might be a little ADD, or a little, you know, gregarious and more outgoing than other people, or more introverted, whatever, we have these labels within psychology to describe things that are actually spectrums of behavior within a network.

And I think one thing machine learning might be able to do is put this all together. Because if you think about what is going on with deep learning, the fact that we can't get labels out of things is a human problem. We can have a system that can solve a problem, but we might not know what to call it.

And really, that's not the problem; it's a human need to know what to name something. What you actually care about is: what protocol should I do differently? What should I eat differently? And so a machine learning system that can understand this network, just by getting all the data and training something and abstracting it, might actually be able to produce recommendations

we can't even fathom right now. And that's very, very exciting. You do need fundamental tools like gene editing, and I would also like to be able to edit the brain, but that's very, very hard. And this gets into Neuralink and other brain-machine interfaces, where before we actually... oh, sorry.

Sorry. No, go ahead, please. I was gonna say, similar to CRISPR, where we can at least build a readout of what's there, with the brain we don't actually understand how to run experiments. We don't even have the readout, because the density of our imaging is so much worse than what's required.

The number of connections within a cubic millimeter of brain is so high. So we don't even have the readout, let alone the ability to experiment. And the natural experiments, kind of grotesque natural experiments, you hear about in psychology come from people that had cancer and needed brain surgery to remove a part of their brain, or

my favorite example, Phineas Gage. My son's name is Ben, so it's kind of funny. He was a construction worker that got some rebar, I think, or a nail, through his head, and apparently was fine. He did not die from having a nail go through his head, but his personality was entirely changed.

And then you look: he became a much more gruff, angry person, and you look at the parts of the brain affected. That's what I mean by a natural experiment. Somebody has a problem, you see the difference in their brain, and you question what might happen on the other end of that. And having a personality change because you accidentally

destroyed a part of your brain is a ridiculous experimentation process. If only we could edit these things. If you think about personality generally, we have no control over any of this. I have no idea, for example, how aggressive I sound right now given how much I've slept, or what I've eaten.

And what do my genes tell me about this? I'd like to have a brain scan. One of the companies I like the most recently is called Q Bio, and I think they're going to go in a really strong direction here. What Q Bio does is a full-body MRI, like every six months, along with full blood work, the latest genetics, all these quantifiable metrics.

And so I've done it twice now, which means I now have a diff on a full-body MRI, where I can say, well, what is my lung capacity this year versus last year? How big is my right or left ventricle, or whatever, in my heart? Including parts of my brain. My dad got a glioblastoma, which is a brain tumor, a few years ago.

And it's one of these things where, can you detect it early on? Just the ability to even capture all of this data, and then hopefully do something about it long-term, is incredibly exciting. We're just at the very beginning of this stuff, just like we need to be able to write, not only read.
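The "diff" Ivan describes boils down to comparing the same quantified metrics across two scans. A minimal sketch in Python; the metric names and values here are invented for illustration and are not Q Bio's actual outputs:

```python
# Hypothetical sketch: diff two sets of scan metrics taken six months apart.
# Metric names and values are made up for illustration.

def diff_scans(previous: dict, current: dict) -> dict:
    """Return the change in each metric present in both scans."""
    return {
        metric: round(current[metric] - previous[metric], 3)
        for metric in previous.keys() & current.keys()
    }

jan_scan = {"lung_capacity_l": 5.8, "left_ventricle_ml": 142.0, "resting_hr_bpm": 58}
jul_scan = {"lung_capacity_l": 6.0, "left_ventricle_ml": 145.0, "resting_hr_bpm": 54}

print(diff_scans(jan_scan, jul_scan))
```

A real pipeline would also have to decide which deltas are clinically meaningful, which is where the hard work lives.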

We can't even scan and find out what's going on yet. Yeah, so basically logging as much data as possible, then taking those inputs and kind of iterating, right? We get the body scan once every six months, and then we can tweak things. I've been thinking about this a lot lately, and I think the

big challenge is to get those tools into the hands of billions of people, because right now it's for the 1%, maybe even the 0.1%. We need to get those preventative tools into, basically, the homes of every

American, and ideally every person on the entire planet. But the challenge is how to do that, and how to do that at scale. Yeah, I'm excited for a few things, and I think devices can do quite a bit. Basically, wherever consumer tech touches healthcare it's good, in my opinion, because you get a lot more data.

So for example, I'm wearing an Oura ring right now, O-U-R-A, and it tracks my sleep, my resting heart rate, a bunch of different things. So let's say I'm working on a cardio regimen and I want to get better at that. I'm also changing my sleep pattern, like I don't look at screens after a certain time in the evening.

And I changed how much water I drink. All these different protocols, and I can go in and see how they affected my sleep, because it tracks everything. And it's just a ring. It's the best wearable I've ever had, because I just totally forget about it, and it takes a few days to run out of battery.

It's a really, really high-quality product. And if you think about where else in your life you could have that, it's everywhere. I get incredibly excited, and this touches back on where I'm investing: I'm incredibly excited for all the things we could be looking at but aren't. A clear example: think about how many microphones are around you right now.

I think I actually have a dozen within this room, and within my house probably 20 or 30. Or how many cameras. Now consider: are they on? There's of course the literal sense of, are they streaming bits anywhere, but then there's the more logical sense of, is anyone listening?

Not just a human, is any software listening? And actually, nobody's listening. There's nobody looking or seeing or hearing anything that's going on. I think that's going to change. Algorithms are starting to work to get things done, and people are going to start listening.

And so one of the products I'm really excited about is not just a Tango VC investment; we're also incubating it. It was an idea I had, and I found a team that was excited about it too, and we're now getting it off the ground. Structurally it also means Tango has common stock, and there's a whole conversation around how incubating a company works with a VC. But the basic idea is: what if a machine could hear and see everything around you, and it's more embodied?

So it's not some outsourced task for Alexa to get done. It's more like, how can you be better? How can you get feedback? The specific thing we're targeting at the start is public speaking. When you're in a meeting, if you use filler words like "like" or "um," you should get feedback about that. The product is really basic right now.

It's speech-to-text. You have a window open while you're in a meeting, you pipe the speech to text, and then detect when you say certain words that are filler. And there's a bunch of them. There are all these stupid phrases, like "at the end of the day" or "look at both sides." It's all just filler that people use to avoid talking.

And the problem is you never actually get feedback about that. Somebody might think you sound dumb because of the way you talk, but nobody's actually going to say, hey, you sound dumb. So having a machine that is on your side, trying to help you get better, is the basic product. If people go to rhetoric.app, they can see the blog that I'll launch sometime soon.
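The detection step Ivan describes, speech-to-text first, then scanning the transcript, can be sketched in a few lines. The filler list below is an illustrative assumption, not rhetoric.app's actual list:

```python
# Sketch of filler-word detection on a speech-to-text transcript.
# The filler list is an illustrative assumption.
import re
from collections import Counter

FILLERS = ["like", "um", "uh", "you know", "at the end of the day"]

def count_fillers(transcript: str) -> Counter:
    """Count each filler word or phrase in a transcript."""
    text = transcript.lower()
    counts = Counter()
    for filler in FILLERS:
        # Word boundaries so "like" doesn't match inside "likely".
        counts[filler] = len(re.findall(r"\b" + re.escape(filler) + r"\b", text))
    return counts

print(count_fillers("So, um, at the end of the day it's like, you know, like a loop."))
```

A live version would run this on a streaming transcript and surface the counts after the meeting, or nudge you in real time.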

And what I love about this is that we start with something really basic. There are probably 50 million parents in this country that would like their teenagers to say "like" less, let alone the parents themselves, who say it all the time. So I think this market is absolutely enormous. And then, where could it go?

I should have something whisper in my ear whenever I might be talking too long, like right now. Or am I talking too fast? Where it's permitted, is my tone too aggressive? Can I walk by somebody in the office and have their name whispered to me? There's this concept of Dunbar's number, where within an organization of two or three hundred people, at some point you stop being able to have relationships with all of them.

It roughly maps to the size of the old tribes humans used to live in. What if you could change that? What if, by being a little bit better about learning your peers and their habits, and establishing memory loops around that, Dunbar's number could bump to 500? That would actually be an enormous change for all these companies.

And what it requires is a concept I don't think I've mentioned yet: a centaur, a human-machine combination. That's an incredibly exciting combo, because you have a human that is enhanced by a machine, and you don't need some Neuralink implant to do it. You have microphones, you have cameras, you have screens, you have the ability to generate voice.

You have all these UI elements that are going to happen far sooner than Neuralink. So it's incredibly exciting to see where this might go with just the interfaces we already have today. Yeah, for sure. I'm noticing you are extremely smart and extremely knowledgeable about a wealth of different topics.

And I'm just wondering, how do you download so much information? Do you have a process, and if so, could you share it with the audience? So I definitely have a habit of collecting things to read and not reading them. It's funny, because what this feels like is a deluge of things to learn.

I only grok, only process, a small fraction of it. But I think it starts with purpose, and I think this is actually the biggest problem with our education system. It does not matter what you get on the SATs; it matters a lot how much math you know, and the SAT is a reflection of that.

So in one way it matters a lot; in another way it doesn't matter at all. Who cares about that score? The difference is intrinsic motivation. If you are learning in order to achieve something in life, you will work hard and be able to achieve it, not because you're just jumping through a bunch of hoops.

Like, I need to do well on this test to get into a good college, and in college you need to compete to land a good internship, and after that get a good job. There's some ladder of things, and it fundamentally misunderstands life, first off, because life is not about a destination. That's absurd.

The last note in the symphony is not the point of the symphony. That's just a ridiculous way of thinking about the world. And also, I think you need a clearer moral grounding. Why do you exist? My view is that we need all hands on deck, because we've got some big problems. We need to get off the planet.

We need to stop people dying. We need to create more easily. Like, the Star Wars prequels sucked; why can't I make them easily? Actually, it takes thousands and thousands of people to make a good movie, so there's this creative friction. I'm bringing up the example of a movie because it seems trivial relative to somebody dying, but actually it's part of the human experience.

And so when it comes to a process for learning all this stuff, it starts with the hunger, but then you also have to find out what works for you. For example, once I learned that two of my kids are dyslexic, I realized, oh, I was probably dyslexic as a kid. And I love audiobooks, and I use all these speech-to-text and text-to-speech tools.

There's a tool called Speechify, for example. I do a bunch of audiobooks and podcasts, and I do them really, really fast, and I've always really liked lectures. So that's the way I learn; other people are different. A good example: Tyler Cowen has an amazing podcast.

He doesn't even listen to podcasts; he only reads the transcript, because he reads, I think, five times faster than I do. But my audio at 2x or 3x speed, with text-to-speech, is probably about as fast as that. That's probably wrong; he's a genius, and I don't think I'm a genius.

So part of it is learning what tools work well for you. It can be really basic things in school, like sitting at the front of the class, because that keeps me captivated and I want to pay attention. I think I'm a very bad studier, and the fact that I learned so much from listening and thinking in class compensated for that. I want to highlight that people are different.

So maybe it's a matter of putting in the reps to get it done, and hard work is really the root of all of this. Now, when it comes to what I actually do day-to-day, tactically: I spend some time on Twitter to find links, I talk to different people, and I have a few subscriptions to read.

But another thing I think I do that other people don't is this fearless attack of a problem, believing you can solve it. There's actually a computer science analogy here. The way normal people think about software is like a black box: I don't know what's in it, and I'm kind of afraid,

like I don't want to click the wrong thing. Software engineers, because they've opened the black box and maybe built it, know how to deal with that, so they dive right in. But when you go to software engineers and say, hey, do you want to look at this machine learning product,

they're like, ooh, I don't know, that's a black box over there, despite having opened one before. And so I've learned enough about different things to know that you can just dive in and get it. I see this all the time, and I think the difference is that you try to figure out the math for yourself.

A good example recently, with COVID: I heard the stat that Black and brown people are three times as affected as others in the country, and I thought, that doesn't sound right; that sounds very wrong. And I looked at the numbers, and it's not true as stated. There's some kind of normalization of the data that makes it true.

But if you just look at the share of the population versus the number of people dead, 17% fewer Black people died than white people. And the reason is that white people are older, and a disease that affects old people more will kill them at a higher rate. This is an example of being fed a narrative and thinking, wait a second,

that does not add up. Why not actually look up the numbers? You can just Google it. Another example is landfills, where there's all this physical intuition you can build. People from Europe will get this, but Americans typically don't: a liter of water is a kilogram.

And so a cubic meter of water is a ton. This sounds trivial until you hear a story like "there's a hundred billion tons of e-waste being generated every year." That sounds bad, right? It sounds really bad. First off, is it true? You can go and verify that. Secondly, ask a basic volume question.

How much space does that take up? If you look at the way we talk about landfills and trash, it does not match up with the volume of the Earth. The entire trash output of the United States for a hundred years wouldn't fill a fraction of Nevada. You could make a dump of a few square miles and it would serve the whole country. People just have

no intuition for these things, because they don't actually look it up. So when it comes to how the brain works, or genetics, or self-driving, or software, or machine learning, you just have to apply this first-principles attitude to everything: how does this system work?

What are the physics of it? What are some basic numbers? And really, it's not even complex math. What is the ratio between weight and volume, roughly? I bring up the example of water because water is pretty heavy, and for e-waste, let's say it's twice as heavy as water.

Something like that. Then you can immediately map weight to volume, map that onto a city, and think physically about what's actually going on there. So I think that kind of basic numeracy is definitely a piece that's missing, and it's something I think about all the time with my kids.
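The weight-to-volume chain he walks through is easy to write down. The tonnage, density, and depth below are placeholder assumptions for illustration, not verified statistics:

```python
# Back-of-envelope: turn an annual e-waste tonnage into a land footprint.
# All inputs are illustrative assumptions, not verified figures.

TONNES_PER_YEAR = 50_000_000   # assumed annual e-waste, metric tonnes
DENSITY_T_PER_M3 = 2.0         # assume e-waste is ~2x as dense as water
PILE_DEPTH_M = 10.0            # assume a 10 m deep landfill
NEVADA_KM2 = 286_000           # approximate area of Nevada

volume_m3 = TONNES_PER_YEAR / DENSITY_T_PER_M3  # 1 tonne of water = 1 m^3
area_km2 = volume_m3 / PILE_DEPTH_M / 1e6       # m^2 -> km^2
century_km2 = area_km2 * 100

print(f"{area_km2:.1f} km^2 per year at {PILE_DEPTH_M:.0f} m deep")
print(f"{100 * century_km2 / NEVADA_KM2:.2f}% of Nevada after a century")
```

With these made-up inputs, a century of accumulation covers on the order of a tenth of a percent of one state, which is exactly the kind of sanity check he's advocating.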

We talk about things, and with kids it starts out with, hey, how much do you think these groceries cost? You're in a store; how much is it? First off, it's a way to learn that things are not free. And you can think about, hey, this Uber driver, or this checkout clerk at the grocery store, or this accountant, or this lawyer: how much do they get paid an hour?

So how many hours would I have to work in order to buy groceries or pay rent? Do all this basic math. That's where it starts, as far as building an intuition for how the world works. In terms of specific processes, though, I don't know how much more advice I can give, because I don't think I have much.

I don't actually feel very productive; that's the problem. I think I accumulate way too much to do in terms of my interests, and there's this sense that I'm not getting enough done, especially when there are geniuses that are so public, like Tyler Cowen. I bring him up because I love his work.

He's incredibly prolific in the work that he does, and he's definitely smarter than me. So it's one of those things where I'm comparing myself against probably one of the smartest people in the world, and I always feel like I'm not enough. But I think that sense of where you are in the absolute sense shouldn't affect

how you go about problems and how you solve them. I have a follow-up question to this. My question would be, how do you instill this thirst for knowledge that seems intrinsic to you? How do you instill that in others, if they don't really feel like learning and gaining skills?

Yeah, I think it goes back to purpose. I started having kids pretty young, so when I talk about kids, I'm not talking about a two-year-old, which is funny when my peers are having newborn kids and think everything's so cute. It's like, I remember that from a decade ago.

I was 23 when I got married and 25 when I had my first kid, and I have three kids: seven, ten, and thirteen. So I have a 13-year-old, which means you're dealing with a grumpy teenager sometimes. And the idea a lot of people get wrong about kids is that somehow you can map what you want onto

them. They are not a clone of you, and you can't push what you want on them. That's ridiculous. What I can do is help explain, almost just by example. I actually asked my kid this the other day, so I have a direct example. It's like, hey son, what inspires you? And he's like, well, you know, the way you do your work is inspiring.

He actually answered that. He gave a modeling answer, which is to say that what I do maps onto him. Similarly, when I see Elon Musk, or the amazing founders I talk to, or everyone else in tech, and I've mentioned Tyler Cowen a dozen times, when I see them I think, I want to be like that.

So I think our media really, really matters. And I think to a certain extent we have this idiocracy of terrible people that should not be paid attention to, from Trump to Cardi B, just fucking morons that get into our media. We need to sweep them out of the way and get people that give a shit, that are trying to change the world, front and center.

And, you know, I'm just tired of the "Zuck is nerdy and awkward" jokes. He's a fucking genius that has built a trillion-dollar company. How about we give him some respect and look at how he did that and model it? Yeah, maybe be more personable or whatever, but how he drinks a sip of water at a hearing is irrelevant.

So I think our heroes matter quite a bit, and I wish our media were better. When I think about the books I like: The Martian is really good as an example of digging in and solving a problem. "Science the shit out of it" is an amazing line. Apollo 13, too. I've liked sci-fi my whole life.

So I think there's a causal side to this too: why sci-fi is good is because you see a slice of the future and you want to go build it. It's just inherently exciting. And when you see a space elevator, or meeting some other race from another planet, that would just be nuts.

It would change everything. So there's just a very, very strong reason to go and do this, and I think that intrinsic motivation is what pulls me in and how I approach problems. Yeah, I mean, that is the perfect sentence to finish this beautiful conversation off. Or is it? Because I'm like, that sentence didn't make any sense.

I think you know what I mean. Yeah, we're good to go. Thank you so much for having me. For sure. Ivan, thank you so much, and talk to you soon. Thanks again. All right, thanks. Cheers. Thanks so much for listening. If you liked the podcast, please subscribe and leave a review on iTunes or Spotify.

And share the episode with someone you know. It really helps me out a ton. New podcasts come out every Monday. See you next week.