<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[The Aarthi and Sriram Show]]></title><description><![CDATA[Long form conversations with people who have "made" it to the inside. Subscribe for a weekly episode from leading CEOs, athletes, entertainers and everyone from the weird to the wonderful. Hosted by Aarthi Ramamurthy and Sriram Krishnan.]]></description><link>https://www.aarthiandsriram.com</link><image><url>https://www.aarthiandsriram.com/img/substack.png</url><title>The Aarthi and Sriram Show</title><link>https://www.aarthiandsriram.com</link></image><generator>Substack</generator><lastBuildDate>Thu, 30 Apr 2026 16:44:15 GMT</lastBuildDate><atom:link href="https://www.aarthiandsriram.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Aarthi & Sriram's Podcast]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[aarthisriramshow@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[aarthisriramshow@substack.com]]></itunes:email><itunes:name><![CDATA[The Aarthi and Sriram Show]]></itunes:name></itunes:owner><itunes:author><![CDATA[The Aarthi and Sriram Show]]></itunes:author><googleplay:owner><![CDATA[aarthisriramshow@substack.com]]></googleplay:owner><googleplay:email><![CDATA[aarthisriramshow@substack.com]]></googleplay:email><googleplay:author><![CDATA[The Aarthi and Sriram Show]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[EP 93 Aidan Gomez of Cohere on AI in the enterprise, what's next in AI and more.]]></title><description><![CDATA[Aidan Gomez, co-founder and CEO of Cohere, talks about his journey to deep learning, being part of the original attention paper, how Cohere is 
tackling AI adoption in the enterprise and much more.]]></description><link>https://www.aarthiandsriram.com/p/ep-93-aidan-gomez-of-cohere-on-ai</link><guid isPermaLink="false">https://www.aarthiandsriram.com/p/ep-93-aidan-gomez-of-cohere-on-ai</guid><dc:creator><![CDATA[The Aarthi and Sriram Show]]></dc:creator><pubDate>Thu, 26 Dec 2024 10:46:13 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/1oatLqWD0fs" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Aidan Gomez, co-founder and CEO of Cohere, talks about his journey to deep learning, being part of the original attention paper, how Cohere is tackling AI adoption in the enterprise and much more.</p><div id="youtube2-1oatLqWD0fs" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;1oatLqWD0fs&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/1oatLqWD0fs?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div>]]></content:encoded></item><item><title><![CDATA[EP 92: Mark Pincus on politics, Zynga and much more]]></title><description><![CDATA[I don&#8217;t think we have ever had a guest who was just so raw and honest and cut loose (just see the first 10 seconds of the video below).]]></description><link>https://www.aarthiandsriram.com/p/ep-92-mark-pincus-on-politics-getting</link><guid isPermaLink="false">https://www.aarthiandsriram.com/p/ep-92-mark-pincus-on-politics-getting</guid><dc:creator><![CDATA[The Aarthi and Sriram Show]]></dc:creator><pubDate>Sun, 01 Dec 2024 21:13:35 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/tLLeXGvoJUc" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I don&#8217;t think we have ever had a guest who was just 
so raw and honest and cut loose (just see the first 10 seconds of the video below). Mark Pincus, who&#8217;s part of Silicon Valley royalty for his work with Zynga, being involved with so many early stage companies and more, came on our show and was - to put it bluntly - extremely honest and direct.<br><br>We covered a lot of ground. Politics and media and his journey. Zynga. Product thinking. How metrics and PM culture spread from Zynga. And much more.</p><p>This was a blast.</p><p></p><div id="youtube2-tLLeXGvoJUc" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;tLLeXGvoJUc&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/tLLeXGvoJUc?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div>]]></content:encoded></item><item><title><![CDATA[EP 91: Ishan Mukherjee and Rox - from being an outsider to building companies in Silicon Valley]]></title><description><![CDATA[We got to meet Ishan Mukherjee a little while ago and were immediately struck by the obvious (highly accomplished, pedigree of building and running products at scale) but also the personal similarities in our story from India.]]></description><link>https://www.aarthiandsriram.com/p/ep-91-ishan-mukherjee-and-rox-from</link><guid isPermaLink="false">https://www.aarthiandsriram.com/p/ep-91-ishan-mukherjee-and-rox-from</guid><dc:creator><![CDATA[The Aarthi and Sriram Show]]></dc:creator><pubDate>Tue, 19 Nov 2024 16:12:08 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/MxXj-ADr520" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We got to meet Ishan Mukherjee a little while ago and were immediately struck by the obvious (highly accomplished, pedigree of building and 
running products at scale) but also the personal similarities in our story from India. <br><br>This episode covers a lot of ground - his story, how education and family really shape someone&#8217;s upbringing. We also enjoyed getting into specific Indian subcultures like the ones around &#8220;quizzing&#8221; (a big part of my upbringing).</p><p>And last but not least, we talk about his new company Rox and what they&#8217;re doing around sales and AI agents.</p><p>This was a lot of fun - enjoy!</p><p></p><p></p><div id="youtube2-MxXj-ADr520" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;MxXj-ADr520&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/MxXj-ADr520?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p><a href="https://www.youtube.com/watch?v=MxXj-ADr520&amp;t=184s">3:04</a> Ishan's Story<br><a href="https://www.youtube.com/watch?v=MxXj-ADr520&amp;t=283s">4:43</a> Inside IIT: India's Most Competitive Universities<br><a href="https://www.youtube.com/watch?v=MxXj-ADr520&amp;t=410s">6:50</a> How Quiz Championships Shaped a Tech Founder<br><a href="https://www.youtube.com/watch?v=MxXj-ADr520&amp;t=705s">11:45</a> Moving to the US<br><a href="https://www.youtube.com/watch?v=MxXj-ADr520&amp;t=1178s">19:38</a> Building Kiva - The Early Days<br><a href="https://www.youtube.com/watch?v=MxXj-ADr520&amp;t=1485s">24:45</a> The Amazon Acquisition<br><a href="https://www.youtube.com/watch?v=MxXj-ADr520&amp;t=1710s">28:30</a> Immigration<br><a href="https://www.youtube.com/watch?v=MxXj-ADr520&amp;t=2095s">34:55</a> What is ROX<br><a href="https://www.youtube.com/watch?v=MxXj-ADr520&amp;t=2220s">37:00</a> Where are AI agents lacking / State of LLMs<br><a 
href="https://www.youtube.com/watch?v=MxXj-ADr520&amp;t=2495s">41:35</a> Reinventing CRMs<br><a href="https://www.youtube.com/watch?v=MxXj-ADr520&amp;t=2625s">43:45</a> How ROX works<br><a href="https://www.youtube.com/watch?v=MxXj-ADr520&amp;t=2910s">48:30</a> AI's Impact on Traditional Industries<br><a href="https://www.youtube.com/watch?v=MxXj-ADr520&amp;t=3225s">53:45</a> The Launch: ROX Goes Public<br><a href="https://www.youtube.com/watch?v=MxXj-ADr520&amp;t=3420s">57:00</a> Advice to Young Dreamers<br><a href="https://www.youtube.com/watch?v=MxXj-ADr520&amp;t=3515s">58:35</a> The Ultimate Quiz Showdown<br><a href="https://www.youtube.com/watch?v=MxXj-ADr520&amp;t=3690s">1:01:30</a> Outro</p>]]></content:encoded></item><item><title><![CDATA[EP 90: On Elon and DOGE]]></title><description><![CDATA[We cover how Elon might approach DOGE from what we&#8217;ve seen at X.]]></description><link>https://www.aarthiandsriram.com/p/ep-90-on-elon-and-doge-skilled-immigration</link><guid isPermaLink="false">https://www.aarthiandsriram.com/p/ep-90-on-elon-and-doge-skilled-immigration</guid><dc:creator><![CDATA[The Aarthi and Sriram Show]]></dc:creator><pubDate>Mon, 18 Nov 2024 10:51:37 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/EHUm6EB6S-E" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We cover how Elon might approach DOGE from what we&#8217;ve seen at X. 
<br><br></p><div id="youtube2-EHUm6EB6S-E" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;EHUm6EB6S-E&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/EHUm6EB6S-E?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[EP 89 Understanding Anil Varanasi: reinventing networking, spotting talent, design and AI and a surprise episode takeover.]]></title><description><![CDATA[Understanding the endlessly fascinating Anil Varanasi reminds me of nesting Russian Matryoshka dolls.]]></description><link>https://www.aarthiandsriram.com/p/ep-89-understanding-anil-varanasi</link><guid isPermaLink="false">https://www.aarthiandsriram.com/p/ep-89-understanding-anil-varanasi</guid><dc:creator><![CDATA[The Aarthi and Sriram Show]]></dc:creator><pubDate>Sun, 03 Nov 2024 20:59:45 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/WAs-wlfzNoE" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Understanding the endlessly fascinating Anil Varanasi reminds me of nesting Russian Matryoshka dolls. You <em>think</em> you understand him: a successful co-founder, with his brother, of a leading networking infrastructure company, Meter. But as someone who has gotten to know him over the years, I know there&#8217;s a <em>lot</em> more to Anil. <br><br>Look a little deeper - open the first doll - and you quickly run into what can only be described as mysteries.</p><p>Why does a networking infrastructure company obsess over design? Why have they built maybe one of the more interesting user experiences around LLMs? 
<em>(he gives us a screen-shared demo, it&#8217;s fun - trust me)</em>.<br><br>Open up one more doll.</p><p>Why do Anil and his brother have a reputation for helping undiscovered talent with a cold email and funding them? A question that many, from Tyler Cowen to Sam Hinkie and others, urged me to ask him about.<br><br>There are many who thank Anil and his brother, but I will mention one person who has credited them in public: Dwarkesh Patel. <em>(Anil gives us their framework, which is elegant and simple and heart-warming.)</em></p><p>One more doll.</p><p>You find Anil and Sunil have a remarkable childhood story in how their parents raised them. From there, the montage sequence of their journey into entrepreneurship is filled with many a daring adventure - including flying to China to deeply understand production and sleeping on factory floors. <em>(Anil tells us this story and breaks down how he thinks of China and the US now.)</em></p><p>And more dolls to go.</p><p>This conversation was a fascinating journey across a broad range of topics: from the future of networking to AI&#8217;s user experience to how great talent looks&#8230; to when Anil turns the tables on us and takes over the episode and starts asking us questions. 
(<em>Anil surprised us and we went deep - from a rant on why Knives Out 2: Glass Onion is one of the worst movies of recent times to Indian food in London.)</em><br><br>Enjoy!</p><p></p><p><strong>Listen: <a href="https://podcasts.apple.com/us/podcast/the-aarthi-and-sriram-show/id1624345213?i=1000675533301">Apple</a></strong></p><p><strong>Watch:</strong> </p><div id="youtube2-WAs-wlfzNoE" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;WAs-wlfzNoE&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/WAs-wlfzNoE?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p></p><p><strong>Chapters:<br><br>0:00 - Intro&nbsp;</strong></p><p><strong>3:48 - Anil&#8217;s early years and background<br>5:23 - Unconventional parenting</strong></p><p><strong>9:35 - Anil's journey to entrepreneurship</strong></p><p><strong>12:30 - Sleeping in factories in China</strong></p><p><strong>15:22 - China vs. U.S.</strong></p><p><strong>18:30 - Why networks are so important</strong></p><p><strong>21:35 - Why networking is still an unsolved problem<br>24:10 - Is hardware too hard?</strong></p><p><strong>26:11 - What does Meter do?&nbsp;</strong></p><p><strong>37:17 - How does Meter work?&nbsp;</strong></p><p><strong>41:08 - Future of enterprise software&nbsp;</strong></p><p><strong>44:00 - Human interaction with AI models<br>46:30 - Why Meter is building AI models&nbsp;</strong></p><p><strong>50:50 - Spotting young talent</strong></p><p><strong>54:00 - Anil's framework to find good talent&nbsp;</strong></p><p><strong>57:30 - How Anil helped Dwarkesh Patel start his podcast</strong></p><p><strong>1:02:00 - The &#8220;X factor&#8221; in Anil&#8217;s investments</strong></p><p><strong>1:02:00 - 
Raising the ambition bar</strong></p><p><strong>1:06:55 - Escaping the competitive Indian dynamics</strong></p><p><strong>1:08:38 - How cinema influences entrepreneurship</strong></p><p><strong>1:17:25 - Why don't we know how planes fly?&nbsp;</strong></p><p><strong>1:19:20 - Lessons from Sam Hinkie</strong></p><p><strong>1:21:04 - Kindness as an operating principle</strong></p><p><strong>1:22:10 - Why hasn&#8217;t Anil had a more public brand?</strong></p><p><strong>1:24:03 - US Immigration</strong></p><p><strong>1:28:00 - Aarthi, Sriram and Anil show?&nbsp;</strong></p><p><strong>1:30:44 - Best Indian restaurant in London&nbsp;</strong></p><p><strong>1:32:50 - Has sneaker culture peaked?</strong></p><p><strong>1:34:25 - Why don&#8217;t wealthy people build monuments anymore?</strong></p><p><strong>1:38:04 - London&#8217;s rich history</strong></p><p><strong>1:40:30 - Why does Sriram have sriramk.eth?&nbsp;</strong></p><p><strong>1:42:00 - Should all startups go direct on comms?</strong></p><p><strong>1:47:07 - Are Aarthi and Sriram &#8220;too online&#8221;?</strong></p><p><strong>1:49:10 - Sriram&#8217;s Silicon Valley groupchats</strong></p><p><strong>1:49:46 - Will Aarthi and Sriram move back to India?</strong></p><p><strong>1:48:12 - Aarthi and Sriram&#8217;s failures in tech</strong></p><p><strong>1:53:55 - Netflix&#8217;s 3D and streaming software</strong></p><p><strong>1:58:18 - Popfly&nbsp;</strong></p><p><strong>1:59:55 - Microsoft success under Satya<br>2:02:00 - On tech execs&nbsp;</strong></p><p><strong>2:03:10 - Nonfiction book that Aarthi and Sriram would write<br>2:06:27 - Aarthi and Sriram&#8217;s favorite Indian movie before 2000</strong></p><p><strong>2:09:48 - The End</strong></p><p>Sriram: Ladies and gentlemen, we have an exciting episode for you here today, and someone you may not have heard from or seen before - and trust me, that's going to change starting today: you're going to see this person everywhere. 
You know, it's as Paul Heyman says in WWE: that's not a prediction, that's a spoiler. We have the one and only Anil Varanasi.</p><p>Now, Anil is the founder, along with his brother, of one of the most exciting networking companies, and that might sound like an oxymoron, but I am going to talk to you about why networking is the sexiest thing out there. But this is not just a conversation with somebody in Silicon Valley who has built an interesting company. There's definitely a part of that, but Anil is also one of the most thoughtful and helpful people you can meet. And when I was preparing for this episode, I spoke to so many people, very well known figures in Silicon Valley, who had stories and questions and anecdotes about Anil, which we want to get into.</p><p>And he is incredibly hard and elusive to pin down for this conversation. I remember talking to him about five years ago to do this, and so we finally made this happen after five years. Anil Varanasi, welcome to the show.&nbsp;</p><p>Aarthi: Welcome, welcome Anil.&nbsp;</p><p>Anil: Thank you so much for having me.&nbsp;</p><p>Sriram: All right. Okay. So we're going to get into networking and why networking may be some of the most interesting software to be built out there, but I also think we're going to talk about how you guys are at the cutting edge of how the user interface with AI is going to work. There's a lot of fun stuff there. I think you're one of the most thoughtful people on talent, which I want to get into. But since it's probably the first time we're having this conversation, I want to get into your origin story. So tell us a little bit about you, your brother, where you grew up and how you got here in the first place.</p><p>Anil: Yeah, my brother and I grew up in Hyderabad until I was 12. He was 14, and then our family moved to Northern Virginia. Seemingly for all the random reasons immigrants move to a certain place, which is they had friends there. And that's like the biggest reason. 
But probably the fortuitous thing that happened to us in Northern Virginia is one, we were next to this eclectic school called George Mason, and I'm happy to go into the history of it and why GMU is so interesting.</p><p>And then two is HHMI Janelia being like 20 minutes from where we grew up. And then three, about 70 to 74 percent of all internet traffic goes through Northern Virginia, again for a lot of different reasons. So from India and Hyderabad to Northern Virginia - we went to college there, we started a company there.</p><p>But I feel incredibly fortunate to have landed there - some sort of Mayflower Plymouth thing for us, but in Northern Virginia.&nbsp;</p><p>Sriram: So I'm curious to talk about this a little bit: your parents and your upbringing. Because you folks had, I would say, an interesting childhood, and I think your parents treated you a little differently</p><p>than a lot of parents treat their kids.&nbsp;</p><p>Anil: Yeah, we were fortunate on a few things. One, our parents were busy themselves. They had actual careers. They were super busy, so we didn't have any of the helicopter parenting and things like that. But a couple of things beyond that, which they were very open to and pushed towards, is what one of our common friends, Sriram, calls free range kids - which is Hinkie's line.</p><p>I think we were basically allowed to do anything possible as long as we didn't hurt each other or somebody else growing up. And especially if you guys grew up in India, like me, you do just get to go and do whatever you want all the time. There are no restrictions, but I think that kind of seeped over even when we moved to America, and that was incredibly fortunate for us.</p><p>But the other thing I think they did that was very deliberate is, probably since I was like 10 or 11 years old, I was always treated like an adult, which was very different than when I went over to my friend's house for dinner or something like that. 
Compared to how their parents treated us and treated them, versus how my parents treated us when they had dinner guests over.</p><p>We were part of the table. We argued about books. One of my dad's cousins is a professor, and he tells an endearing story of him mentioning something - I forget if it was chemistry or biochemistry or something - and suddenly the four of us just left the dinner table, because we had brought our own books to cite, saying this is the reason why I am right. All four of us.</p><p>And none of us agreed with each other, which is also interesting. But I think we were always treated like adults. We were given free rein. Decisions that were made were okay for us to make. It didn't have to be that an adult had to make a decision. People become adults much later than they used to.</p><p>Sriram: Okay, can I ask you, what is the defining line or behavior that you think makes someone an adult versus a child? Because I've heard this before, and I think about, for example, how someone like a Walt Disney was able to tap into his spirit of childhood for so long, or if you look at Elon, there are so many things that he does which are childlike.</p><p>And I think that is a part of modern life where our life expectancy is probably about 50, 60 years over what it was a thousand years ago. We have access to modern medical care. We live longer. And I wonder whether what makes an adult versus not is just fundamentally different than what it used to be.</p><p>Anil: Yeah. But before we get into it - in the United States, average life expectancy is going down, not up. And we can talk about that separately.</p><p>Sriram: Oh, wow. Fact check. Okay. Okay. Hold on. 
Like in the vice presidential debate a couple of weeks ago - no one's allowed to fact check!&nbsp;</p><p>Anil: Maybe to directly answer your question.</p><p>I think of it as something very simple, which is being okay with the consequences of your decisions. And that, culturally, feels like it's pushed much later. Society makes decisions for you. Your parents make decisions for you. Culture makes decisions for you. And the point by which you make decisions and own up to them yourself - that seems like it's much later.</p><p>I've been tracking the literature. We don't have any longitudinal studies that I trust just yet, but it just feels that way. And I think one of the few places in the world where that's not true is India. It still feels like people just go do stuff, and they're younger.</p><p>And I think that's bright for India because of that.&nbsp;</p><p>Sriram: Okay. So on the theme of doing stuff when you're younger, I want to come back and tie this back to you and your brother. Because you get raised in an Indian origin household, but you folks are actually quite entrepreneurial.</p><p>&nbsp;So could you just walk me through that journey? Because you guys have done some very interesting things, which I really want to get into.&nbsp;</p><p>Anil: Yeah, I think our first love was actually film. And we spent a lot of time making a lot of film. And we got really into it - down to the fact that we had a crew of 80, and there were a lot of things we were doing.</p><p>&nbsp;And at one point, we were deciding if we should make film forever, or whether we should actually build all the stuff we wanted in technology. And our calculus was this: technology has a half-life, storytelling doesn't, and we can do that later. 
I think I'd give the majority of the credit to my brother.</p><p>&nbsp;He's arguably one of the smartest people I've ever met, but incredibly kind - down to the fact that if there were two of me instead of the two of us, one of us wouldn't have survived. It's that good that he's around. And we've now been working together for 20 years, and I usually joke</p><p>&nbsp;that other than two prisoners that have shared a cell, there are not that many people that have spent more time together than me and my brother. And I think how it kind of started is he's just more of a tinkerer. And I think his greatest gift is the fact that he can just pick up and learn things on the fly.</p><p>&nbsp;So we ended up building all sorts of things, from electronics to software to film to music. And once we really got into film, we started really thinking about what physical theatre design looks like, and we started building those. Then we really got enamored with software, around maybe 2003, 2004, mainly because a lot of things we wanted to do in film needed better software.</p><p>&nbsp;And you'd make your own plugins in Maya and Shake and all those things that I remember back in the day. So it started off that way. Then, I think, part of the entrepreneurial stuff that you're mentioning is that in Northern Virginia, in something like 40 to 50 percent of all households in Fairfax County and Loudoun County - which are two of the highest per capita income counties in the country - at least one income in the household comes from working in the government.</p><p>&nbsp;And so we were no different, growing up and hearing a lot of these conversations. And so we just started doing small RFPs for the local government. And I just describe it as a video game that's like an MMO. 
You start beating a small boss and then you beat a bigger boss - that's kind of how it started.&nbsp;</p><p>&nbsp;Sriram: There was a time when both of you shipped yourselves over to China and spent a lot of time in factories over there. Why, what happened, and what did you learn from that?&nbsp;</p><p>&nbsp;Anil: So we deliberately wanted to start Meter, and we were trying to figure out how the city of 800,000 people figured out how to build products for 8 billion people.</p><p>&nbsp;So we're like, is it in the water, is it in the air, whatever. So one Christmas, we just decided we're just gonna move to San Francisco. We get to SF, we didn't know anybody here. I think one of the greatest things about San Francisco, especially a decade ago, was that you could literally cold email anyone - and the word literally in the actual sense - and people would actually just respond and say, what's up, what do you need? We had a deliberate intention to start Meter.</p><p>&nbsp;We came to San Francisco. We met a bunch of people that today we end up working with. And it's an incredible thing that all of them were just open to talking to us. But as we started Meter - for context, Meter is a hardware, software and operations company. We're entirely vertically integrated.</p><p>&nbsp;What we do is make it easy for anybody to get internet infrastructure. And part of that, what we build is routing, switching and wireless hardware. And as we came to SF, we tried to learn how other people build things at scale. Then we just said, okay, it's time to just get started on Meter, and we started building a lot of the operating systems and front end.</p><p>&nbsp;One of the fundamental bottlenecks was the fact that you'd design a PCB or design a new piece of hardware, and it would take months to come back from Asia. 
And we really just wanted to circumvent and short-circuit the amount of time it would take, and we looked around and said, where's the best place in the world for hardware?</p><p>&nbsp;And especially in 2014, Shenzhen was an incredibly great place. It had gone from 30,000 to 30 million people in 30 years. And the majority of Shenzhen at that time was actually under the age of 40. You couldn't even see an older person while walking around. So we ended up going to Shenzhen to learn how to build hardware.</p><p>&nbsp;And it was very similar to what we did in San Francisco. We just showed up. We didn't know anybody there. We had never been there. We didn't speak the language - later on, I ended up picking up Mandarin after living there. Both of us were vegetarian at the time, and it's the best diet program ever - I recommend going to Shenzhen.</p><p>&nbsp;I think we ended up losing like 15 pounds each. And then it was just starting to build hardware for Meter. So we ended up cold emailing a bunch of factories and manufacturers and designers to just start building it out. So that's how we ended up in Shenzhen.&nbsp;</p><p>&nbsp;Sriram: Now, this is a very different era, right?</p><p>&nbsp;Obviously, we are in 2024, we are talking in a time of all these real geopolitical tensions between the US and China, but that was a very different era. And I'm curious, what did you observe or learn about how that ecosystem worked that you think has impacted you later? And there's a second part of this question, because there's been a theme recently that it is going to be hard to replicate some of that culture, that 996 work ethic, in some way in Western countries.</p><p>&nbsp;I don't know, I'd love to get your reaction to that, but also, what do you observe that maybe shapes what you do today?&nbsp;</p><p>&nbsp;Anil: I tend to believe that people in China, in an individual sense, are just incredible. 
In aggregate, I think there's a lot of issues geopolitically, like you mentioned. Culturally, if you went anywhere else in 2014 or 2015, in the rest of the world, everybody wanted to be America, right?</p><p>&nbsp;In the way they spoke, the way they dressed, what music, what film. Everything was how to be America. But what felt different about China was that they wanted to beat America rather than be America. And that felt like a very stark difference. They felt like they had earned the right to compete with the largest economy.</p><p>&nbsp;And you could feel that energy and fervor: we just want to build things, we know we can build things, and we can do it at scale. And in Shenzhen, that culture was ever present. You design something in the morning, and then by night, somebody would run over to your lab saying, it's produced. Do you want to test it?</p><p>&nbsp;It was just beautiful at that time. Especially if either of you have been to Huaqiangbei - incredible place, it's like the electronics market of the world. You can literally take a bucket around and buy buckets full of memory and cameras and other things. It's like a different place.</p><p>&nbsp;It's cyberpunk to the core. But that culture was steeped very deeply in working really hard. But also, I think Shenzhen at that time, like I was mentioning, it was young people. They were all working really hard to send money back to families that lived elsewhere in China. I think that was also a critical thing.</p><p>&nbsp;Like, you actually had a mission, which is you're doing it for your family. That's like the most powerful thing possible. And you had this country that was just getting out there and actually having world class incomes and other things. And I'm not sure about not replicating it - I do think SF has a lot of that in pockets.</p><p>&nbsp;I think India now has that too. Talking to folks in Africa, it feels like that fervor is there. 
I haven't been to China since the beginning of the pandemic, so I don't know how it's changed in the last four years, but it feels like other places have that still.&nbsp;</p><p>&nbsp;Sriram: I'm curious. I think I wanted to ask you this.</p><p>&nbsp;When you start talking about Meter - one of the things that really fascinates me about what you have done with Meter, which we're going to get to a bit later, is - let's be blunt. Networking has never been considered the sexiest, most glamorous sector, okay? There have been these waves of technology.</p><p>&nbsp;Right now it's AI. You go back several years, it's the on demand economy. You get SoLoMo. You go through all of that. And in our adult lifetime, there was never a "year of networking." Now, of course, I'm saying this facetiously, because there are these incredible companies.</p><p>&nbsp;Meraki, et cetera, which have really pushed the state of the art on networking. What made you and your brother - you come to San Francisco, you're part of the action. I'm pretty sure people were not talking about how to build networking companies. They were probably talking about Mark Zuckerberg or whoever the hot person of the time was.</p><p>&nbsp;What made you want to go focus on this?&nbsp;</p><p>&nbsp;Anil: I definitely think most people thought, what are you even talking about? I think when we said networking, they thought we were talking about social networking. I don't think they thought about actual physical computer networking. But there are a few things that we really care about in our work.</p><p>&nbsp;And I hope it's part of my work for the rest of my life. Which is, one: I think the internet is one of the best forces of good we've ever created. We can argue about whether it's in the top 10 or the top 50, and where it is compared to penicillin and other things, but the internet is just good. 
Just phenomenal.</p><p>&nbsp;Second, we believe we will all use the internet more than we currently do, which also felt like a very safe bet to make. And three, nobody else was working on it. Like you said, at one point Cisco was the largest company in the world, and after that, it just felt like nobody was really learning it and working at it.</p><p>&nbsp;I was recently mentioning: today, if you study computer science at some of the best computer science schools in the United States, and I'm not sure what it's like in other places, you have one distributed systems class, and maybe networking is covered for a week.</p><p>&nbsp;Which is absolutely insane and mind-boggling to us. Because everything in the world is packets. Everything. This call that we're doing right now, somebody using WhatsApp, self-driving cars. You mentioned building models and AI: inference and training is entirely networking, and it's bottlenecked by networking.</p><p>&nbsp;We'll go so far as saying, with the current trajectory, I am more confident that energy consumption will be solved, but networking has to change before it gets really efficient. It felt like a very serious bottleneck, and it felt very important. We thought it would just continue to grow, and we had this particular, unique ability of knowing what the new technology was and knowing this old, forgotten, almost pyramid-building alchemy of how to build networking, and clashing the two together.</p><p>&nbsp;And we had particular opinions on how to do it, all the way from the hardware design and up. So that's how we knew that it was right.&nbsp;</p><p>&nbsp;Sriram: I want to maybe try and make this more concrete, because, and I've learned a little bit about this and feel like I have the benefit of doing some homework, a casual observer might go, okay:</p><p>&nbsp;My Wi-Fi router generally works. I have mesh networking. There is an IT team which generally seems to be able to figure it out. 
Isn't this a solved problem? But I think, one, I would love to get a sense of what you mean when you say we've got to push this forward. And second, I think one of the most interesting things you've done is how you architected a very modern, forward-thinking company, which I want to get to later.</p><p>&nbsp;But why isn't networking just a solved problem? We're like, hey, Wi-Fi works, network infrastructure works. Why is it not a solved problem?&nbsp;</p><p>&nbsp;Anil: I think there are so many layers to that question. But maybe one thing, to step back in a larger sense, is the fact that it all working is a miracle in the first place. The fact that the internet works at all: it's just a bunch of open protocols.</p><p>&nbsp;We all said we'd agree, and then it just works. Then at each layer of the stack: if you pick routing, how we do routing, down to Ethernet. There are ways to make Ethernet better, too, that thousands of people are working on as we speak right now. Then how do you do switching? How do you do actual memory and packet movement and packet manipulation?</p><p>&nbsp;Then even on the wireless side, how do you actually do it in different spectrums, the speed you do it at, the power you do it at? So I do think things work decently well. They're not as utterly bad as 10, 15, 20 years ago. But all the infrastructure we have today will not scale to where we think it's going to go in the next 5 to 10 years.</p><p>&nbsp;So every layer of the stack can get better: Wi-Fi, wired, cellular, data centers, routing, switching, fiber, all of it can get much better. But I do also expect that in the next 5 to 10 years, protocols will also change; there will be new things that come out and protocol-level advancements too.&nbsp;</p><p>&nbsp;Aarthi: Why do you think that is? You mentioned colleges teaching computer science and IT. 
And having maybe one week, like one chapter. For me, that's how it was: distributed systems, yeah, you took it, but then TCP/IP was like a couple of pages, and that was it, right? You just flip through this one chapter that's got networking, you get your credits, and you move on.</p><p>&nbsp;Why do you think that is? It's not that there aren't lots of unsexy problems that we study, focus on, and work on. We would have thought assembly language was it, right? Like we don't need to innovate on top of it, but we do keep pushing the boundary there. But why not here?</p><p>&nbsp;Why not in networking and hardware as such?&nbsp;</p><p>&nbsp;Anil: There are companies that are pushing it, right? We're recording this in October '24.&nbsp;</p><p>&nbsp;Aarthi: Yeah.&nbsp;</p><p>&nbsp;Anil: Broadcom is now larger than Tesla.&nbsp;</p><p>&nbsp;Aarthi: Yeah.&nbsp;</p><p>&nbsp;Anil: And there are some incredible companies, like Arista; what Andy Bechtolsheim and them are doing is just an incredible amount of work to push it.</p><p>&nbsp;But I do think what's happening is, one, abstractions have gotten too high. It's the same thing as operating systems, right? Nobody really takes operating systems classes anymore. Because what's the cycle in the world when somebody does something new? I'll build a new abstraction.</p><p>&nbsp;I'll write a paper about it, then I'll write a book about it. And if it catches a sufficient amount of fire, it'll become popular, now it'll go into curriculums, and there's only X amount of credits to fill, and that fills up universities. 
But if you look at it from the beginning itself, there's a great book on universities called Wisdom's Workshop that tracks universities from the start, and basically the story is: as any field progresses, it's abstractions that become cool, because people want to know new stuff and people want to build on stuff.</p><p>&nbsp;I think that's one thing that happened with networking. Second, I don't think it's as fast to iterate on networking as it is on an application, where feedback loops are much tighter. If I'm building an app or doing anything like that, I can hit compile and I know if I did it right or wrong.&nbsp;</p><p>&nbsp;Aarthi: Yeah, and we love the instant gratification.</p><p>&nbsp;Anil: Exactly.&nbsp;</p><p>&nbsp;Sriram: Now, the reason I ask that question is because, obviously, these networking companies, you guys, but also folks like Broadcom, have been around for a very long time. Cisco is still a major player; there's Arista, Palo Alto Networks and folks in security, and a bunch of others out there.</p><p>&nbsp;But none of them are going to be the kind of company which takes generative AI and builds a whole new user interface paradigm, which then goes viral on Twitter. Which speaks, I think, to your personal sensibility, your brother's personal sensibility, and the kind of company you've built.</p><p>&nbsp;So could you talk to us a little bit about what you've been doing there, and maybe show us? Because I think what you've done is actually pushed the state of the art of user interfaces on AI forward. So I want to really get into that, maybe with a quick visual also.&nbsp;</p><p>&nbsp;Anil: Sure. Yeah, I'll try to pull that out.</p><p>&nbsp;But I think there are a couple of things that happened for us. Let's imagine model building and progression just pauses entirely. Something happens. It just pauses. 
It will not get any better than it currently is, whatever the state-of-the-art parameter count is today. Even with that, it was clear that at some horizon, software would become something that happens on the fly, rather than the artisanal, handmade product it is today.</p><p>&nbsp;I think people forget software is handmade. Literally a handmade product. And anything that's handmade is fragile, not really scalable, and you can't really customize it for everyone on the fly. So what we've been thinking about is: generally, when you build software, you end up building it for a barbell of users.</p><p>&nbsp;You're either building it for the expert or you're building it for the novice, and you have to pick. You're taught in product management and elsewhere: pick your ICP, focus on that, don't really do anything else. Just figure out who you're building for and forget everybody else.</p><p>&nbsp;And in our business, we power internet infrastructure for some of the largest companies in the world and some of the smallest and fastest-growing ones that might not have any kind of IT or networking expertise at all. And it just felt like a Faustian bargain having to pick one or the other. Meanwhile, models were continuing to get better.</p><p>&nbsp;We had opinions that you shouldn't have to choose between experts and novices. And then, particularly in the problems we're trying to solve, we have real-time data, we have actual, exact use cases. And I generally have been feeling this for the last few years; there's this trite way of saying it:</p><p>&nbsp;We think software should be soft again, that it should be malleable. Geoffrey Litt and a bunch of folks have done great work on malleable software, and there's what Bret Victor is doing with Dynamicland and other things. 
None of those ideas seem to be coming into software anymore.</p><p>&nbsp;As the tech industry has gotten really big, three tech companies are now 10 percent of the entire global stock market. It's ludicrous. It's become more cookie-cutter, and we're just producing assembly-line software. And it was baffling to us: why is it that you had these great models sitting there, and software could be better, but nobody was doing anything?</p><p>&nbsp;Maybe another way of framing this is: if somebody told us in 2014 that something like GPT-4 would be available by 2024, we would have said the world would change entirely. And today, almost nothing has changed. That's another curious thing for us, truly. So the product you're mentioning, what we've been trying to do, is called Command.</p><p>&nbsp;We had three simple goals with it. We're a networking company, and networking and other enterprise products usually end up being about dashboards. You have to build dashboards where people can interact with the infrastructure, make changes, look at it, get reports and visuals and all these different things.</p><p>&nbsp;But networking started entirely on the command line, and you had to basically learn each different command line. The awesome thing about the command line was that it's really fast to get information and really fast to take action. With dashboards, you don't have to remember 100 different commands, but it's really slow.</p><p>&nbsp;And if you want a particular feature the way you want it to be, you put in a feature request, some PM will pick it up, then some designer, some engineer, then QA, and by the time you get it, it's three to six months later. So what we wanted to do with Command is very simple: we wanted to take the best of dashboards and the best of the command line.</p><p>&nbsp;And smash them together. So here's what you can do; I'll try to quickly share my screen. 
That way people have a visual of what we're trying to say here. We've been thinking about an entirely new interface for software. Normally our dashboard looks like this. Beautiful software.</p><p>&nbsp;You can go click through and see different parts of what's happening, but&nbsp;</p><p>&nbsp;Sriram: By the way, this is probably one of the few times in history where somebody has looked at a networking dashboard and said, ah, the beauty of this dazzles me. But sorry, let's keep going. Actually, you know what, I want to make a point here, which is, no offense to some of the other networking companies, but when somebody thinks of a large enterprise networking company, they don't think, we're going to innovate on user interface paradigms.</p><p>&nbsp;Aarthi: I was going to say that. Especially for enterprise companies, and having worked at a few, dashboards, I think, are important. I know there's going to be a lot of other stuff that we want to do, but to me it is shocking how little effort and art and craft goes into just showcasing what we are doing and whether we are doing it well. Dashboards are almost always this outsourced part of the whole business, but everybody looks at them. To me, that is just the most shocking part of it.&nbsp;</p><p>&nbsp;Anil: For sure. And it should be beautiful. Every part of it should be beautiful, and beauty isn't just pixels, but how it works. So this is what you would expect.</p><p>&nbsp;Dashboards really going into every part of what's going on and all these different things. What we've been thinking about is: what if a user wanted something different than this? How can we bring the command line to users in a modern way? And where do we think software is going in general?</p><p>&nbsp;It couldn't be that dashboards are the pinnacle, the last thing we all build, and this is it. 
So taking models into consideration, there are a couple of things we wanted to do. One, we wanted to make it really easy for anybody to get information about their networks. So we'll see if this works.</p><p>&nbsp;I'm on our test network. You can do something like this,</p><p>&nbsp;and it's networking stuff, but normally people have to go through and look at hours of logs and things like that. Here, you'll just get an answer back, and you don't have to do any of that. But one of the things we've been really thinking about is that a lot of these chat interfaces are just text only. Why don't we do other things inline?</p><p>&nbsp;So let me see if I can go to a particular client. We'll see if anybody's on the test network. But what will happen that's interesting is that, on the fly, Command will actually write software just like an engineer would. We wanted the software generation to be inline as well, and it should be real, live software; these components should be fully usable just like anything else would be.</p><p>&nbsp;And the other thing we felt was that most of these interfaces felt too ephemeral. I did something, and it was gone, and I didn't have a connection to it at all. That also felt really weird. There's a bunch of great work that so many people have done over the last four or five decades on how to make software feel more personal.</p><p>&nbsp;And one of the ways software feels very personal is when you get to share it with others. Both of you have worked in social networking, so you understand: when people get to share, it feels like there's even more there. And particularly in networking, one of the ways I grew up learning networking is you were in a lab, and when you were doing something, somebody else would peer over your shoulder and say, have you tried this, have you tried that?</p><p>&nbsp;So we said, can we take those things, right? Something that isn't ephemeral. 
What if you could just pick this up and drag it over here? And this should be an entirely multiplayer area. I think Figma has done an incredible job of building a new paradigm for how performant multiplayer can be on the web.</p><p>&nbsp;I don't think they get enough credit; they're an incredible engineering team. But it was also baffling to us: why is that stuff not used normally?&nbsp;</p><p>&nbsp;Aarthi: By everyone. When I saw the demo that you put out on Twitter, one, it got me very excited, right? For years and years, like you said, the dashboard was it; that was the pinnacle of any sort of innovation in large-scale enterprise businesses.</p><p>&nbsp;So one, it was like, okay, this totally changes it. But two, this is going to be the future, right? This is how I think we are going to think about enterprise software. Not just viewing things, dashboards, that kind of thing, but really how we build software within enterprises, how we want it to be. To what you said about software being soft: it needs to be mushy and malleable, like Play-Doh, right?</p><p>&nbsp;You want to be able to do things with it. Your view of the software is personalized compared to what other people get to see. And that, I think, is what we want to be doing; that needs to be the future. It's just that seeing the demo you had was like the first real glimpse into that future, which has got me very excited.</p><p>&nbsp;Sriram: Yeah. There's a sense with musical artists that they have influences, and with a hit song you can hear what it takes inspiration from. So when I look at this, I see, one, obviously, how you have built this on top of a bunch of models. And I want to get into that. 
It is state of the art, and nobody has done that before. But I can see the lineage: I would say Figma, as you mentioned; the notebook-style UI of Jupyter and IPython, and yes, all that it comes from; but also, and I think you mentioned him, Bret Victor and the idea of having these data interfaces. If you go to his website, he has these little cars that you can drive around. And like a song, I see you bringing multiple influences together here. Talk to us about how this works under the hood, right?</p><p>&nbsp;So when you are writing this piece of code, I assume it's spitting out a version of some really fancy shell script that a bearded network engineer knows how to write.&nbsp;</p><p>&nbsp;Anil: We took a little bit of a different approach. One of the ways we've been really thinking about it is that our ambitions for Meter are that,</p><p>&nbsp;if there is a packet moving in the world, we want it to move through Meter hardware and software. Over the next year or two, people will really get to see what that means, and how far we're willing to go for every single packet to be ours. So what we started a couple of years ago, and all the credit goes to our engineering teams at Meter, is really thinking about how we can</p><p>&nbsp;virtualize everything down to the port in the backend. Then we have real-time data streaming in from every piece of Meter hardware, everywhere. So, counterintuitively, when somebody types something, it's actually just getting translated against how that hardware is virtualized.</p><p>&nbsp;And the models that we ended up building are just small models, because we believe that in enterprises, you actually don't want creativity. Accuracy matters a lot more, right? Nobody ever said, I want my enterprise software to make shit up. That's not how it's going to work. 
So they really know how we write software,</p><p>&nbsp;how we design software, but also how our hardware is built, the architecture, and what our backend is like. So you've got models that are writing queries against that virtual backend we have, where every single port of hardware lives, and then another part of the system actually writes the software.</p><p>&nbsp;One of the really interesting parts about the architecture is that it's all real-time data. Everything that I showed you is literally being pulled in real time from a piece of hardware somewhere in the world. And what we wanted users to be able to do is, one, get information really fast; two, take action really fast; and three, have software written for them just the way they like it.</p><p>&nbsp;And in that canvas I showed, multiple people can work together at the exact same time; you can all pull in different components that are generated for you, and voila, you now have a dashboard that's yours, how you built it. So our main points were really, again, pushing towards: what does software look like five or ten years from now?</p><p>&nbsp;And it felt like we can do a lot of it today, or at least get started on it. There's a lot more that we want to do with Command, but it felt like a good start.&nbsp;</p><p>&nbsp;Sriram: Extrapolating a little bit from what you're talking about and what else is happening: I do think so much of SaaS software today is an API around maybe a data store or an interface, which requires a bunch of schlepping, to use the Stripe brothers' phrase.</p><p>&nbsp;And then you construct, and I don't mean to underplay how much effort it takes, but you build a bunch of knobs and dials on top of it, and that's a lot of the value add. What I think you're showing here goes beyond that. 
And I would say, if you look at, say, Claude's Artifacts or what ChatGPT is now doing with Canvas, there is a sense of, okay, you can create software on the fly.</p><p>&nbsp;And it exists only because you asked for it in that particular context. Where do you think this heads in terms of user interface design? What does a Salesforce.com look like in a 2027 context, or, I don't know, Atlassian, or pick your favorite SaaS dashboard? What do you think this all looks like three, four years down the road?</p><p>&nbsp;Anil: Yeah, I have pretty strong opinions on this, but there are two paths here. One is if we don't have another order-of-magnitude improvement in models as far as parameter count and capabilities; the second is if we do. So maybe we can talk about the more exciting one. You said 2027.</p><p>&nbsp;Let's say we get another turn at this and models get 10x bigger. I just don't see how, unless you control the data and the pipeline yourself, there's going to be any value for software companies at all. It will feel like there was a blip of 20, 25 years where software reigned supreme. I just don't see how that's possible anymore,</p><p>&nbsp;given what's already possible with models and other things, if you push them to be 10x larger. Let's say it's because of synthetic data, or new algorithms, or new architecture. Where I see it all going: all three of us are sufficiently old. So when we used to sign up for services... that's true, me and you are.</p><p>&nbsp;When we used to sign up for software in 2004, 2005, 2006, they used to ask about who you are. They would say, tell us where you live, what you like, and all this stuff, and they would try to customize it a little bit. But if you see what's happening in software these days, that doesn't happen anymore.</p><p>&nbsp;Nobody actually asks you who you are, and nobody actually tries to figure out what your flavor and profile as a person is. 
I actually think that will come back, and all it will take is understanding a person's preferences; then software is just made for them entirely on the fly, for whatever they're trying to do.</p><p>&nbsp;And that could be any sort of thing underneath: CRM, HR, finances, anything. I think it will center more on the user rather than the system of record, which is what it's been about for the last few years of the web. And we plan on pushing this too: even with Command, what we want to do is actually understand the user deeply and put that into the context of the models and other things, to really take it further.</p><p>&nbsp;But I think that will happen at a really large scale. That personalization of software will actually happen, which we all tried in the early 2000s, and it just didn't go anywhere.&nbsp;</p><p>&nbsp;Sriram: Yeah, I think about a .emacs.d or a .vimrc. One of the things, when I was watching you today and the demo, is it strikes me that we might be on the edge but still missing the actual metaphor for interacting with these models. Because that is the REPL, and that is the power of the REPL's feedback loop, which in this case is also helping, in some RLHF-style mechanism, to tell the model what you're doing; you're working hand in hand. That's in the text interface.</p><p>&nbsp;Now, of course, as I'm saying this, Advanced Voice Mode on GPT-4o just rolled out to everybody, and that's carrying it out in a regular human conversation. But it does strike me, and I think you're absolutely correct that we could have personalized software, that we still have to figure out what the interface paradigm is where you're interacting with a ridiculously smart,</p><p>&nbsp;intelligent piece of model weights, and you and it have to work together. 
And I think what you've built is one of the few things I've seen that is trying to really push at that, which is why I think it's really interesting.&nbsp;</p><p>&nbsp;Anil: Actually, I think both of you mentioned this: we had the idea for Command and this interface maybe about a year and a half ago. We had a bunch of work to do on our end building out the architecture and the data pipelines to do it entirely with real-time data.</p><p>&nbsp;And we had assumed somebody else would do it. We had already been testing this even before Artifacts and other things came out. But when we released Command, I think one of the most surprising things was, yes, we knew that people would like it, because we had been showing it to a lot of people, even folks we all know who have built cutting-edge software.</p><p>&nbsp;But the amount of adulation Command got surprised even us, basically. All the messages we ended up getting were: this feels like the future of software; and, you guys are a networking company, what the hell are you doing inventing this? Those are the two things we heard consistently, over and over again. But it has been surprising to me in general, with the amount of capital and talent that's been going into this field the last three, four years, how little we have to show for it as far as what could be the future, if you will.&nbsp;</p><p>&nbsp;Sriram: Maybe let's ask you that question.</p><p>&nbsp;You guys are a networking company, right? No offense to anybody from Broadcom watching, but I don't expect somebody from Broadcom to be up there winning the Apple Design Awards or helping bring Bret Victor's ideas to life. So why is Meter doing this? There are multiple whys, but why are you guys doing this?</p><p>&nbsp;Anil: Alan Kay had this really great quote that Jobs used to use a lot, which was from a great essay Kay wrote in the late eighties: 
People that care about software build their own hardware. And we have two versions of that ourselves: we believe people that build the hardware should be the ones responsible for it,</p><p>&nbsp;and if you have the hardware, you'd better make better software than anybody else. I think we're in this luxurious position where we control the entire stack ourselves, and we always have. Meter is not just a company that builds routing, switching, and wireless; we're the company that deploys it and maintains it, too.</p><p>&nbsp;So we deeply feel the pain, because if the software sucks or the hardware sucks, we're on the hook for making it better. So fundamentally, I think Meter is in the right position because of what we build, because of our business model of being entirely vertically integrated, and because of this viewpoint that if you have those things, you can do it.</p><p>&nbsp;This is why, you mentioned Musk earlier, Tesla is literally the only company in the world, I think, actually taking action in the real world with models.&nbsp;</p><p>&nbsp;Aarthi: Yeah.&nbsp;</p><p>&nbsp;Anil: There's nobody else.&nbsp;</p><p>&nbsp;Aarthi: Especially after their series of demos a couple of days ago, it became even clearer that being able to have visibility into the hardware and owning the hardware stack gives you enormous authority and power to build on top of it and go up and down the stack, so to speak.</p><p>&nbsp;Anil: Yeah. And you can control it. Let's say you're trying to build something at the highest level of the stack, but you want all the layers below to change something because it will work better. If you're relying on somebody else... you all know how hard it is to work with other companies. Forget working in your own company;</p><p>&nbsp;working with other companies is like two orders of magnitude harder, and nothing ever moves. 
But if you own the thing, you can just change whatever you want, from data fidelity to data labeling to APIs and other things. But to maybe directly answer your question, I just think we need better software in the world.</p><p>&nbsp;Yes, pixels have gotten better. Design systems have gotten better. We've got React and all these different things that made it easier to build. But if you zoom out a little bit, a dashboard or a piece of software that was built in 2010, 2011, other than the pixels themselves, is not that different 15 years later.</p><p>&nbsp;Aarthi: Yeah.&nbsp;</p><p>&nbsp;Anil: And for an industry that prides itself on saying we're pushing things, it's all new, you haven't seen anything like this, it just doesn't compute for us.&nbsp;</p><p>&nbsp;Aarthi: But it's similar to what you said about Figma too, right? Until Figma came along and challenged the status quo on design.</p><p>&nbsp;Especially on multiplayer design: earlier, lots of design companies came in and said, this is the state of the art with respect to prototyping and wireframing, but it was all nice little cleaned-up user interfaces, very incremental with respect to what they shipped.</p><p>&nbsp;And then Figma came along and said, actually, no, we are going to change how people think about it, not just designing but collaboration, FigJam and everything else. This completely changed it. So I think now we are at that moment, what Figma was for design, but now for writing software and programming as such.</p><p>&nbsp;So I think from here on, this is like the fork in the road, where you're going to see better and better things happen with respect to software.&nbsp;</p><p>&nbsp;Anil: I hope so, because so many people have pinged us saying, are you interested in turning Command into a separate company, for every piece of software?</p><p>&nbsp;We've gotten so many pings about that. 
We're just like, this shouldn't be stuck with just networking; it should be for all software, every piece of software I use. So I am very hopeful about where this all goes.&nbsp;</p><p>&nbsp;Aarthi: Yeah, that's awesome. To pivot to a few questions:</p><p>&nbsp;You talked about talent, and I want to double down on that, because from what you've written and what you've talked about, talent, whether directly or indirectly, is something you cover and focus on quite a bit. How do you find good talent? How do you spot them, fund them? You do a lot of work behind the scenes on this, but how do you find good talent?</p><p>&nbsp;Anil: I hope you guys haven't found everything. I try to stay in the shadows as much as possible.&nbsp;</p><p>&nbsp;Sriram: We have our sources.&nbsp;</p><p>&nbsp;Anil: This is probably in the top five things I care about outside of Meter. I wrote something a couple of years ago that got very popular, which is that the average age everywhere is increasing dramatically.</p><p>&nbsp;There are good parts: you accumulate knowledge, you make better-informed decisions over time, and you have pattern matching and other things. But the really bad parts are two things. One is that the burden of knowledge is increasing. Benjamin Jones and others have written a lot of great papers about this over the years.</p><p>&nbsp;Matt Clancy, who's an independent researcher now, has written really great things about the burden of knowledge increasing. No matter which field you pick, the age of first achievement is increasing dramatically. And that's not just true in science and math; it's also true in film.</p><p>&nbsp;It's true in government. The average age of a Congressperson is growing something like four and a half months every year. The average age of somebody who runs a university is growing similarly. 
If you look at the NIH directors, or who the PIs at NIH are, basically everywhere we see people getting older before they have responsibility.</p><p>&nbsp;And that could be good. But if you turn back the clock, it was young people that did all the most impressive things, from Watson and Crick to the Macintosh team. I think the average age there was like 23 or 24, but&nbsp;</p><p>&nbsp;Sriram: The founding fathers.&nbsp;</p><p>&nbsp;Anil: That's what I was going to bring up.</p><p>&nbsp;The founding fathers, that's probably the best example possible. Washington was one of the few that was over the age of 30. Everybody else was younger, and that's the greatest founding story possible. But on the talent side, there are two separate areas to think about. One is how you evaluate the right talent for you, but also how you identify young talent that hopefully will get to run the world 5, 10, 15 years from now. Outside of Meter I spend a lot of time on the former, and in that I believe in the entire approach of outbound rather than inbound. I just think inbound is wrong, for a lot of reasons I'm happy to get into.&nbsp;</p><p>&nbsp;Sriram: Let's maybe break this down. So if I'm getting this correct, and I might be revealing something I shouldn't talk about, but you personally, I think, have been involved in funding a lot of different young people in lots of different ways.</p><p>&nbsp;How do you set this up? How do you find them? How do you evaluate talent? Just walk us through your whole system.&nbsp;</p><p>&nbsp;Anil: It's very rudimentary, as it should be. My brother and I have had a very simple rule for the last decade. If we come across something good on the internet, that might be a YouTube channel, a blog post, a podcast, a GitHub repo, whatever the case might be. It doesn't really matter how somebody is creating it. 
And it ties back to why I care about the internet, which is that you get access to all the nodes in the world very easily, compared to before when you didn't. So if we come across this and we think the work is sufficiently interesting and there's some spark of genius there, and they don't have an audience at all, a low number of subscribers or something.</p><p>&nbsp;You guys are having me spill all my secrets, but we just send them an email. It's very simple: how much money do you need to do this full time for six months? And that's the only question we ask.&nbsp;</p><p>&nbsp;Aarthi: What about the ROI? How do you think about it, how do you frame it?</p><p>&nbsp;Anil: I don't see it that way. I don't see it that way at all. I went to this university called George Mason, where I studied networking and economics. Probably one of the most eclectic groups of people. You guys know a bunch of them too. Caplan, Tabarrok, Cowen, Kling, Boettke, Hanson, all these guys.</p><p>&nbsp;Sriram: Yeah.&nbsp;</p><p>&nbsp;Anil: And out of all of them, Cowen is obviously the number one person pushing on this, for a lot of different reasons. And you probably know this: there are two essays Cowen has written that are under a thousand words that I think are his best work.</p><p>&nbsp;One of them is about raising the aspirations of other people, where he said that could be one of the highest-return things you could be doing. And we had this idea of doing this even before that essay came out. But I do think I got influenced a lot by Cowen and those guys. And my brother went to George Mason too.</p><p>&nbsp;So there's some germination there. And there's what I was mentioning before about growing up near HHMI. Whether it's Kay or Donald Braben's work or John Ioannidis, there's this concept that it's much better to fund people rather than projects.</p><p>&nbsp;It's way better to fund people and let them fly. 
And this is HHMI's thing too. So I don't really think about an immediate ROI at all. The way I think of it is, the reason to deploy capital is to be oil for the engine, and the young people will run the engine. We can just be a small lubricant, if you will.</p><p>&nbsp;Sriram: By the way, I have to say, this is the most Tyler Cowen conversation we've had in a while. That post you mentioned probably had a huge impact on me. Because what you're talking about, where he talks about how the best thing to do for a young person is to basically get them to dream bigger.</p><p>&nbsp;And to show them they're capable of a lot more, as maybe one of the most impactful things you can do. Now, when I was researching this, there have been so many people who had stories about you impacting them. I want to talk about one. Tell us about how you basically made the Dwarkesh Patel podcast happen.</p><p>&nbsp;Anil: I don't think I did. By the way, I get way too much credit for this. The way I think about him is, he's the Charlie Rose or Oprah Winfrey or Howard Stern of the nineties and early two thousands. And he's got a magic in him. That's all him. But I think what happened is he had written a blog post.</p><p><a href="https://www.youtube.com/@DwarkeshPatel/videos">https://www.youtube.com/@DwarkeshPatel/videos</a></p><p>&nbsp;I can't remember if he had already started a podcast or not, but he had written a blog post about Einstein's year of miracles or something like that. It was a long time ago, and he had one or two posts online, and I had come across it somehow, I can't remember how. And he was still a student at UT studying computer science.&nbsp;</p><p>&nbsp;Talking about immigration, he was in a precarious situation because he was already over the age of 21. He wasn't even sure if he was going to get a green card to stay in the country, which would have been a travesty for America. That's a whole separate topic. 
And then I just reached out asking him this question. And to his credit, he tried to lowball me as much as possible.</p><p>&nbsp;He's like, this is what I'm going to do with the money, I just need the bare minimum, and other things. And I was just like, don't worry about all that. He's like, do you want to get updates on what I'm doing with the money? And I'm like, I actually do not. Just do your thing.</p><p>&nbsp;And then possibly the other thing I tried to do, and Sriram, I'm just looking at you, I texted you three or four years ago. I was like, you need to talk to this kid Dwarkesh. And you're like, I'll check it out.&nbsp;</p><p>&nbsp;Sriram: You know what I should have done? I should have been like, if you start a podcast, I want to book out all the sponsorship revenue right now at this current price point. Question for you: do you think the world is better off with Dwarkesh doing a podcast or doing something different?&nbsp;</p><p>&nbsp;Anil: Oof, man, what a question. Maybe a year and a half, two years ago, I had Dwarkesh over for lunch. He was just moving to San Francisco, because I was pressuring him to move to San Francisco.</p><p>&nbsp;I was like, don't live in Austin, you have to come to SF. I think I was a good input, along with a bunch of his other friends and other people, and he finally made it over. So when he made it over, my wife and I had him over for dinner. Good dinner, he leaves, and then I was talking to my wife.</p><p>&nbsp;I'm like, I don't know if I did the right thing. This guy is supremely talented. And by the way, the reason Dwarkesh is able to do something as well as he does is that he actually understands the things he's talking about, which seems like a very low bar, but it's a very high bar to understand things deeply.</p><p>&nbsp;He's actually a gifted engineer, from my perspective. 
I was actually lamenting that I pushed him toward podcast and media stuff and stopped him from starting a great product or a company or something. And then I used to play the counterfactual with him a lot, which is: stop podcasting, go do something better.</p><p>&nbsp;And to his credit, he would push back hard. He's like, no, this is the thing, this is why I should do it. But I lamented for a while whether I had pushed him towards the wrong thing.&nbsp;</p><p>&nbsp;Aarthi: I remember Dwarkesh coming home, similar timeframe, I think. Sriram and I were chatting with him and he was recording something.</p><p>&nbsp;And I looked at him and I was like, so you're moving to San Francisco. He's like, yeah. And I'm like, and what do you plan to do? He's like, podcasting. And I'm like, what? That's not a job. In my mind I'm like, how can you do this for a living? He's like, yeah, I'm just going to try this out full time.</p><p>&nbsp;And I look over with this panic look on my face, and I look back at him and I'm like, have you had lunch? And he's like, no. Oh, can I order you lunch? And he looks at me and I was like, I'm just going to get you food. I just went into full mom mode, trying to feed this guy and trying to make sure that he's not making bad life decisions by just going to San Francisco to podcast.</p><p>&nbsp;I'm like, rents are expensive. It's not going to pay you anything. You really have to seriously consider your choices here.&nbsp;</p><p>&nbsp;Sriram: So I think the takeaway from this is: do not listen to our life advice. If you're listening to the show, do the exact opposite. And by the way, Dwarkesh gave me a lot of the questions that I'm asking you, because he had this whole set of things like, you need to ask him this, and this. Thank you, Dwarkesh.</p><p>&nbsp;Actually, this is a question from Dwarkesh, which is: you've done several of these grants. I won't ask you how many, but I know you've done a lot. 
If you had to pattern match the ones that have worked out, for some definition of worked out, and the ones that have not, what would be the commonalities on either side?</p><p>&nbsp;Anil: Yeah, I think about this a lot and I don't know if I have a great answer yet. And we haven't done it a lot, by the way. I think we've only done something like a hundred, 120. So it's not a massive scale.&nbsp;</p><p>&nbsp;Aarthi: That is a lot.&nbsp;</p><p>&nbsp;Anil: Not the scale we want to do it at. And maybe I'll come back in a year or two. I'm working on some other thing that will be orders of magnitude more ambitious than that.</p><p>&nbsp;But I think the commonality probably ties back to the original discussion on parenting. For a lot of them, they at least had one parent who believed they could do it. And usually what happens is, when my brother and I reach out to people, we're generally the first people to say, you can do this, seriously. And it takes them aback a little bit too.</p><p>&nbsp;They're like, what do you mean? For example, somebody we did this with a couple of years ago actually has a feature film coming out, I think in late 2026, because of the work they've been doing since then. And they didn't take themselves seriously either. They had this small YouTube channel that we came across, and we just emailed them saying, have you considered doing this full time?</p><p>&nbsp;And they're like, full time? Question mark, question mark. That was the answer. But I think generally we're the first people to push them saying, hey, consider doing it full time, things like that. But the kernel. 
Which is probably the biggest problem in the world to solve: can you give everyone a loving parent, a parent that believes in you? I think for all of them there's this kernel, where at some point the parent sees that they can do it, especially when it's a young person, when you're 18, 19, 20, 21, 22. I think there's something there, but I don't know if there's a full pattern. That's one thing I've been extrapolating as I've gotten to know all these people.</p><p>&nbsp;Sriram: I just think what you're doing is so profound and beautiful, and I've been trying to keep track of it, and I'm going to try and follow that, because I think it's super impressive. But I think there's a thread to pull on here on young people. If you had to figure out a way to raise the ambition bar for young people, what would that be? And where would moving to SF stack up in the list of things that you tell them?&nbsp;</p><p>&nbsp;Anil: I think this SF thing is related to the fact that somehow, culturally, it's become true that to have an impact, you have to be a founder. And it's actually not that glorious, being a founder. I think both of you have been founders.</p><p>&nbsp;I know Aarthi definitely has. It's brutal. You don't want to be a founder. You want to be a founder only if it's the only way you can do what you want to do; otherwise you shouldn't, is the answer. Similarly,&nbsp;</p><p>&nbsp;I'm not convinced SF is for everyone, but what I am fairly convinced of is that agglomeration matters a lot. So whatever people are doing, how do you find your people? And for a lot of young people these days, by the way, it's just on the internet, right? It could be a Discord, it could be something else. You just find your people somehow. But I don't think SF ranks that highly.</p><p>&nbsp;Probably the highest thing is not waiting. 
The latest version of this meme is, you can just do things. And I think it applies more to young people than anybody else. The downside risk is actually fairly low. You can just go back and do other things later on. And if I had to push people's ambitions, it would be to take themselves more seriously than they do, because I think young people can actually make a big change.</p><p>&nbsp;And then I think actually reading stuff from before what's been written these past 50, 60 years will give a better perspective. The older literature, when you go read it, whether it's novels or biographies or other things, really pushes the idea that if you're young, you go do something and you get on with your life.</p><p>&nbsp;And I'd say it's very simple things. Maybe it's the point on Cowen earlier. Cowen had this really great book on culture in 2000 that basically nobody reads, but it's a fantastic book (link - <a href="https://www.amazon.com/Praise-Commercial-Culture-Tyler-Cowen/dp/0674001885">https://www.amazon.com/Praise-Commercial-Culture-Tyler-Cowen/dp/0674001885</a>)</p><p>&nbsp;Sriram: Trust me, I have no sympathy for Tyler. He has such a huge audience. It's okay if he has a book no one reads.</p><p>&nbsp;I'm not shedding tears for it.&nbsp;</p><p>&nbsp;Aarthi: It's not about Tyler. It's about us not benefiting from reading it.&nbsp;</p><p>&nbsp;Anil: But maybe the most ambitious thing would be: how can we change it culturally, so that young people can do things? I have very few novel ideas, and one of them is that we should just push young people to do things faster.</p><p>&nbsp;Sriram: Can I ask you maybe a question tying back to the Indian origin story? The stereotypical Indian upbringing is: you get good academic grades, you do really well in academics, you pick a safe profession. Engineering, being a doctor, a lawyer, et cetera, right? 
If you're in the arts, you'd better not come back home.</p><p>&nbsp;And then there is almost this slightly risk-averse life path that you're supposed to be on. Now, of course, this is a generalization, and I think a lot of folks back home have shifted away from that. But how does what you're talking about fit in? Because I think there's an element of risk taking and entrepreneurship in there, which is: maybe you drop out, maybe you go off.</p><p>&nbsp;You chart your own course, and you say the downside is low. This will not fly with a lot of our relatives back in India, right? They'd be like, don't watch that podcast right now. It's a bad influence. But how do you think these cultural themes intersect?&nbsp;</p><p>&nbsp;Anil: Yeah, it's a really good question.</p><p>&nbsp;And it's actually one of the reasons we were so interested in film when we were growing up. Film is one of the best ways, I think one of the greatest mediums we've ever come across, to actually have influence on people. Whether it's tropes of optimistic sci-fi or actually seeing how folks build things, film could be one way to change culture.</p><p>&nbsp;I think India especially is so influenced by film. You could point to any state and find politicians that used to be in film, for that particular reason. And that's true in India at a different order of magnitude compared to anywhere else. So I think film is one way to do it.&nbsp;</p><p>&nbsp;Sriram: Should wealthy Silicon Valley founders and CEOs just be producing movies?</p><p>&nbsp;Aarthi: They should be in storytelling, at least in some form.&nbsp;</p><p>&nbsp;Anil: Maybe I'll ask you guys. You guys are at the forefront of media, and you're trying to do something new. 
Do you think that as whatever's happening continues, whether it's on YouTube or podcasts or other things, culture won't change in a lot of places, even India?&nbsp;</p><p>&nbsp;Sriram: I'll give you two stories and an opinion.</p><p>&nbsp;One is, I once ran into Michael Douglas at an event. I was talking to him about Wall Street, which I grew up watching. He told me that to this day, and he's a lot older now, people come up to him and tell him, I got into finance because of you, right? And he always goes, Gordon Gekko was a bad guy, right?</p><p>&nbsp;But there's something about that movie which made people want to get into finance. Same thing with The Social Network, right? Aaron Sorkin tried to portray a story of Mark Zuckerberg inventing Facebook just to get good looking women to like him. Absolutely a lie, right? But the storytelling did its job, because if you look at the last 15 years, probably once a month I meet a founder who's much younger than me, who was probably about 15 or 16 when the movie came out, and they saw The Social Network and they were like, I want to do that, right?</p><p>&nbsp;I want the Harvard dorm room and the ping pong table. So I absolutely think storytelling is upstream of culture, and I've seen this in India, where I'm so struck by how many Bollywood movies, or movies across the board, have entrepreneurs as protagonists.</p><p>&nbsp;I think this is very true, and actually very underappreciated by Silicon Valley. And it's always frustrating to me that we basically outsource the depictions of what we do to a set of people who generally don't like us and are full of disdain. You see it whenever you watch, for example, a popular show on HBO or whatever.</p><p>&nbsp;So I think movies and storytelling can absolutely shape culture. 
And I actually think that one of the things folks with the means should be doing, and I think Aarthi and I and a few others are trying to do this, is absolutely to produce content which does this. What is the modern Star Trek, which can inspire people? What is the modern version of an inspiring sci-fi or business movie? I absolutely think that's required.&nbsp;</p><p>&nbsp;Aarthi: I was going to say this too, right? That's one of the reasons I think we've been doing this podcast, and we've been doing it for over three years.</p><p>&nbsp;It started out as, okay, we are on Clubhouse and we want to have these live audio conversations. But then we quickly realized that, hey, there was this dearth of people talking about what we do, people in technology, in a way that is real, not some really contrived version.</p><p>&nbsp;At that time, 10, 12 years ago, when I first moved to Silicon Valley and then went to a different city, people were like, do you write code like in The Matrix? I was like, oh my God, they have no idea what I do. This is bad. They think I'm Morpheus or something. But the other part, for me specifically, is that I got tired of the non-optimistic portrayal of technology. And this is whether you think of it in journalism, how it is reported, how founders are talked about. Technology has given both Sriram and me everything, right? This is what we know, this is what we do, this is what we've built our careers on, our lives on, this is how we met.</p><p>&nbsp;And so for us, it became almost a second job to be these faces of techno-optimism. 
And we take this very seriously, because I think there is a lot of goodness that comes from people in tech, people working on technology, people building technology products.</p><p>&nbsp;And I think it's worthwhile spending time telling those stories.&nbsp;</p><p>&nbsp;Anil: I'm not that worried about it anymore, to be honest. Maybe this is counterintuitive, but it's the same as policy influence. Ten years ago, the knock on Silicon Valley was that they know how to build stuff, but they really don't understand DC.</p><p>&nbsp;They're not going to figure out how to do any work there. A bunch of gray hoodies, and they don't really want to talk to anyone. Now, ten years later, Silicon Valley is accused of having too much influence in DC. That's the new thing. So I think something very similar is going to happen with film and storytelling.</p><p>&nbsp;Sriram: Yeah, our show is supposed to be very optimistic, but let me hate on a particular movie for the next 30 seconds, right? One of the worst movies I saw in this particular context in the last year was the Knives Out sequel, Glass Onion. If you haven't seen it, trust me, you're not exactly missing out on cinematic history.</p><p>&nbsp;But spoiler alert, the villain of the movie, played by Ed Norton, is a tech billionaire who's loosely modeled on Elon and a few others, right? One of the things which strikes you when you watch that movie is that Rian Johnson, the filmmaker, finds it really hard to reconcile why this guy, Ed Norton's character, is despicable. He's a bad guy, I get it, but he's also built all these companies, so they can't reconcile the achievement with the evilness. 
So if you actually watch the movie, every single time they talk about his company, it's as if he has stolen the idea from someone else. There's literally a sequence where the core idea for the company turns out to be stolen from somebody else, because it was so hard for them to reconcile this. And I watched this and I said, okay, we in Silicon Valley need to do a better job of telling our own stories, right?</p><p>&nbsp;Because otherwise we're going to have these folks hating on us till the end of time. So my hope, for those of you watching this, is: let's go make something that makes Glass Onion disappear, and we get an optimistic take on technology.&nbsp;</p><p>&nbsp;Anil: This is a personal crusade now, but on precisely what you mentioned, I think the biggest thing that will change over the next decade, and why I'm not worried about it, is that we'll move away from this concept that if you have an idea, that's how things happen. The way companies are actually built is massive amounts of pain.&nbsp;</p><p>&nbsp;Aarthi: Yeah, grinding.&nbsp;</p><p>&nbsp;Anil: You just endure that for a really long time. So much so that even if somebody told you all the right ideas, you'd still have to work hard to make it happen.</p><p>&nbsp;And I think film has the most power to show how that actually happens, because the work is where all the magic is, not the idea. I do think ideas matter; I'm not on the end of the spectrum that says ideas don't matter at all.&nbsp;</p><p>&nbsp;Aarthi: I think&nbsp;</p><p>&nbsp;Anil: Yeah, even with the right idea, you still have to do the work.</p><p>&nbsp;Sriram: I said,&nbsp;</p><p>&nbsp;Aarthi: Wait. 
I think PG has this thing he said, I think in my batch of YC or something, which is that in the movies you see this montage scene where they're repairing things, they're doing stuff, and they're getting somewhere, and that montage is the actual building. They skip past that.</p><p>&nbsp;And it's like, hooray, we made it. It's like, no, that is the part that you're going to be doing for the next decade plus. It just stuck with me for a while, because it's so true.&nbsp;</p><p>&nbsp;Sriram: Yeah, that's so true. Whenever I talk to movie makers, they're like, what was the drama? For them, the drama is some person having that aha moment. They're like, wait, say that again. What did you say? Aha, right? And they crack it and they write it on a whiteboard, and then it's like, boom. But after that, you know, is the actual hard shit.</p><p>&nbsp;Okay, that's the Rocky running up the stairs moment. But you know what? Can I go back to hating on Knives Out? I ran into Ed Norton at an event, right? And I gave him, for five minutes, the real life version of this. And I'm pretty sure Ed Norton thinks I'm a total crazy person. There's an unresolved tension between me and him, but moving on.&nbsp;</p><p>&nbsp;Aarthi: Which only one person is thinking about at this point. I'm sure Ed's like, who the hell is this random tall Indian guy?&nbsp;</p><p>&nbsp;Sriram: After this, I'm going to go and Google the reviews and let the internet know what I thought of that movie.</p><p>&nbsp;But it's also true. The same is true of Blink Twice, which just came out. There are so many of these movies. How many movies have you seen where there's a hoodie-wearing villain who's nerdy and speaks quickly? Which, by the way, tells me that movie makers are lazy. Have you seen Mark Zuckerberg these days?</p><p>&nbsp;That guy looks cool, okay? He has a chain, right? 
You need to upgrade your wardrobe. But okay, let's move on. Let's move on. We can maybe edit this whole thing out. Okay.&nbsp;</p><p>&nbsp;Now I have some rapid fire questions for you. Why don't we know how planes fly?&nbsp;</p><p>&nbsp;Anil: Oof. I think there are engineering challenges.</p><p>&nbsp;I think there are economic challenges too. At one point, starting an airplane company was the coolest thing in the world. Talking about film, Catch Me If You Can is amazing and shows a bit of that. I think it's one of Spielberg's best films.&nbsp;</p><p>&nbsp;Sriram: Actually, wait, do you know something about Catch Me If You Can?</p><p>&nbsp;Nothing in that movie is true. It's a movie. It's all turned out to be a lie, right? The whole movie is fake.&nbsp;</p><p>&nbsp;Anil: That's the whole point of the movie: just like the character fakes everybody else, the movie is supposed to fake the audience too. So I think there are engineering things.</p><p>&nbsp;I also just don't think it's economical to care about it as much anymore. But what's surprising to me is that, even given that, we're all okay just getting on a plane all the time and flying. That speaks to convenience more than anything else, what human nature is drawn to. And one of the reasons I ended up writing what you're referring to is that it was in the midst of all the safety stuff having to do with models and other things.</p><p>&nbsp;And my point to a lot of the people actually building this stuff is, I don't think normal people actually care. If it's good, they'll use it. They don't care how it works. They just do not. 
And I was trying to make a point about whether it's planes or chemistry or fire or models or gravity or a bunch of other things.</p><p>&nbsp;The majority of us don't really need to know. If we're not working on something day to day, we're okay just being consumers of it.&nbsp;</p><p>Sriram: That is true. By the way, the post is at your website, <a href="http://anilv.com/understand">anilv.com/understand</a>, I'll drop a link. And there's a bunch of questions in there which, trust me, will send you on Wikipedia rabbit holes.</p><p>&nbsp;For example, I did not realize that we do not know how general anesthesia works, which slightly scares me. Okay. How did you meet, and what have you learned from, Sam Hinkie? Mr. Process. Trust the process.&nbsp;</p><p>&nbsp;Anil: Oof. I can't remember how we met. It might've been Dan Romero, who was one of the first people I cold emailed, maybe 10, 12 years ago at this point. Dan had this really great blog post about this thing called cjdns.</p><p>&nbsp;This was when crypto was just getting started, and DNS is one of the underpinnings of the entire internet infrastructure, and Dan wrote this post about whether you can actually make DNS better by having it be distributed and all these different things. So it could have been Dan, or it might have been John Collison.</p><p>&nbsp;I can't remember who. But the things that I learned from Hinkie, I wish he could teach the world. How to care about family, I think that's one thing. He has a beautiful family that I really admire, just the way they are and how they function. And I think he is okay being wrong for a really long time.</p><p>&nbsp;And that's an underrated quality. 
Most people want credit as fast as possible, and there are a lot of benefits to that, because you can accumulate credibility, hit the ground running, and reach some escape velocity. But he's one of the few people that I think is okay being wrong for a really long time, or at least perceived to be wrong from the outside for a really long time.</p><p>&nbsp;And then, a lot of people talk about being long term. This guy's actually long term. The meme from Dune, our plans are measured in centuries, I think actually applies to Hinkie in a lot of ways.&nbsp;</p><p>&nbsp;Sriram: What does kindness as an operating principle mean to you?&nbsp;</p><p>&nbsp;Anil: The best principles are made very clear by their antithesis. And the antithesis of kindness, for me, is actually being nice. I'll give you an example in a workplace. A colleague works really hard for a few months or a few weeks or something, and they're getting ready to ship whatever they're building.</p><p>&nbsp;But it's not at the quality it should be. The nice thing is to be like, oh man, they worked on it for three months. I'm not going to say anything. It's going to shatter them. I'm just going to let them ship it. That's being nice. The kind thing would be to go up to them and say, I think the quality of this is not that good.</p><p>&nbsp;I think you can do better. Can I help you? And that distinction is really important. Somehow society has veered more towards niceness. But at least at Meter, we really care more about kindness than niceness.&nbsp;</p><p>&nbsp;Sriram: One last rapid fire question, because I think Aarthi wants to ask you something else.</p><p>&nbsp;Why don't you publish what you write? Which, by the way, multiple people asked me to ask you.&nbsp;</p><p>&nbsp;Anil: Okay, in the spirit of spilling my secrets, I will spill another one today because it's you guys. I do, I just do it under pseudonyms.&nbsp;</p><p>&nbsp;Aarthi: Okay. 
What?</p><p>&nbsp;Sriram: There's an anime PFP avatar I'm following right now.</p><p>&nbsp;Which is actually Anil.&nbsp;</p><p>&nbsp;Anil: I wish I had that time. I have a newborn at home. I don't have any time for that. But the long form stuff was last decade. Yeah, I do publish it. I think maybe I had overestimated the costs of all my different identities being in the same place, and I underestimated the benefits of those identities being in the same place.</p><p>&nbsp;So if I went back in time, I don't think I would do it again, knowing what I know now. Because, you mentioned Kwok earlier, he described Twitter and the internet the best way possible for our microcosm, which is tapping the tuning fork and seeing who it resonates with. That's writing and Twitter and other things. And I think if I had everything under one identity in one place, the volume of my tapping would have been much higher and the inbound would have been great.</p><p>&nbsp;Some of the greatest people I've ever met in my life, people that have had a big influence, whether it's authors or research or other things that I'm probably going to work on for decades, have literally just come from something I've written; they just email. So I think I was entirely wrong about that. But I do publish a lot, just not under my real name.</p><p>&nbsp;Aarthi: Amazing. Thoughts on US immigration?&nbsp;</p><p>&nbsp;Anil: Oof, man. We should do a whole episode about this. I think the post you're referring to, if you guys can link to it: William Kerr, I think he's at Harvard now, probably did some of the best work here. It's the graph that shows the net migration of talent to the United States, talent that passes a sufficient bar, compared to any other country.</p><p>&nbsp;Almost every other country has been a net exporter of talent. For the last few decades, the U.S. has been the only country that's been a net importer of talent, and that has been consistently true for a long time. But the real thing about U.S. 
immigration is, if I had time to work on it, which I hope somebody else will pick up: there is almost no argument, barring criminals and the like coming in, against any type of legal immigration. For low skill labor, high skill labor, anything. Even when low skill labor enters, we now have sufficient data on how that positively impacts other families' household incomes. When we have a higher rate of immigration, because of the agglomeration of ideas, we get better and faster invention, and you can measure it with patents or anything.</p><p>&nbsp;Immigrants turn out to be much better at starting companies, because they're already taking risk by coming to a new country, and they have nothing to lose and they can just go for it. But in general, for U.S. immigration, we could have some sort of Matt Yglesias goal of one billion Americans and work backwards. In Times Square, in Chicago, in San Francisco, the government should buy out billboards and work backwards from one billion to where we are now.</p><p>&nbsp;And be like, we have to get to a billion Americans. What number exactly, I'm not sure. Yglesias has great points on why a billion is the right number, and it's a round number and it's provocative and all these different things. But in general, I think we should just push, as a society, as a culture, and as citizens, and just ask for more net migration, because there's no data and nothing that points to it being bad at all.&nbsp;</p><p>&nbsp;Sriram: I think some of our most popular conversations have been on this topic. The last time we did one, with DD, we did a couple actually, we probably got hundreds of DMs with some really heartbreaking stories, but that is on the downside. 
We also got so many DMs and emails from people who just need amazing talent and struggle to get them into the country.</p><p>&nbsp;And I think if America is going to stay competitive and build all the amazing things that we want it to, it needs to figure this out. And I and others are trying to help behind the scenes in a few ways, which maybe we can chat with you about offline.&nbsp;</p><p>&nbsp;Anil: Please, I'd love to.&nbsp;</p><p>&nbsp;Aarthi: Yeah, this is something that we care deeply about.</p><p>&nbsp;I think we were beneficiaries of the immigration process. We almost got kicked out of the country totally randomly through RFEs and stuff, and so for us, if we had to pick one fight, this would be it: we have to bring more skilled legal immigrants into the country.&nbsp;</p><p>&nbsp;Anil: I think just legal immigrants.</p><p>&nbsp;It doesn't have to be skilled. Just literally legal immigration. It turns out to all be good.&nbsp;</p><p>&nbsp;Sriram: Okay. All right. Now, when we talked about this, you said there's going to be a hostile takeover of the show by you. So the Aarthi and Sriram Show may end up just being like OpenAI, but this is your chance, Anil.</p><p>&nbsp;I know.&nbsp;</p><p>&nbsp;Anil: So maybe the first question is, why change the name from the Good Time Show to the Aarthi and Sriram Show?&nbsp;</p><p>&nbsp;Aarthi: Sriram, this is on you.&nbsp;</p><p>&nbsp;Sriram: Good question. The origin of the Good Time Show is actually due to Aarthi. One day Clubhouse was at the top of the charts, and we always wanted to do something in the media space.</p><p>&nbsp;We said, okay, let's start hosting a show, but we just had no idea what to name it. So two things happened. We were trying to figure out how to get across this idea of optimism and hope, and that it's just fun. 
Second is, we had just seen the Safdie brothers movie Good Time, which, by the way, for those of you who have actually seen the movie, is a bit of an ironic title, to put it mildly.</p><p>&nbsp;Aarthi: I liked it. Sriram did not like it. I spoke to Marc and he said it wasn't a great movie.</p><p>&nbsp;I was like, I don't care. This is our show. We're going to call it that.&nbsp;</p><p>&nbsp;Sriram: And honestly, it's one of those things where we didn't have a name, we had to go live, so let's take this. And it starts, right?</p><p>&nbsp;And we were off to the races. But about a couple of years ago, we had a chance to do a reboot when we moved the show off Clubhouse into this. And for me, I'm a big fan of pro wrestling, as is probably well known. And in pro wrestling, one of the hardest things to do is to get yourself over, right?</p><p>&nbsp;Yeah. And what this means in pro wrestling parlance is that the crowd reacts to you. They cheer you if you're a good guy. They boo you if you're a heel, a bad guy. And that's table stakes to be a really top performer over there. But the really hard thing to achieve in pro wrestling, which my friend Triple H said, is to get people to chant your name.</p><p>&nbsp;And one of the things that people do is they invent these tricks and ways, either it's a song or they have a catch line, where you are forced to say their name, because if nothing else, you remember what this person's name is, right? And for a quick 30 seconds, this is going into a deep rabbit hole.</p><p>&nbsp;But one of my favorite stories is from Chris Jericho. He was facing this very new up-and-coming wrestler called Fandango, and this is the legend. And he was like, oh my god, the audience wouldn't even know the name of this guy I'm supposed to be facing. How do I get the audience to know his name?</p><p>&nbsp;So what he would do is go out every single week and mispronounce and butcher the name on purpose. He was a bad guy, right? 
And the crowd would fix it, at any rate. So anyway, I thought we needed to get ourselves over, and the simplest way is to use our names. Hence the Aarthi and Sriram Show.</p><p>&nbsp;There you go.&nbsp;</p><p>&nbsp;Anil: Awesome. What is the best Indian restaurant in London?&nbsp;</p><p>&nbsp;Sriram: I'm not touching this.&nbsp;</p><p>&nbsp;Aarthi: Our favorite is Dishoom. I know it's supposed to be the very, I guess&nbsp;</p><p>&nbsp;Anil: Maybe. Which Dishoom are you going to? If you could go to only just one.&nbsp;</p><p>&nbsp;Aarthi: We live closer to the one in South Kensington. And the secret is, if you live in a specific set of zip codes, Dishoom will home deliver.</p><p>&nbsp;Every time we visited before, we always stayed in the touristy part of London, where they do not home deliver. But then when we actually started living here last year, we opened up Deliveroo and realized that they actually home deliver Dishoom, which is horrible for our health, but great because we can now get Dishoom delivered without having to wait in line.</p><p>&nbsp;Anil: And do you guys get the secret, whatever the coins are? Do you guys know about the secret coins tradition?&nbsp;</p><p>&nbsp;Aarthi: I don't think Sriram knows about it, but yes. Yeah, you can roll the dice and you can, like,&nbsp;</p><p>&nbsp;Sriram: yeah.&nbsp;</p><p>&nbsp;Aarthi: Sriram, what are you&nbsp;</p><p>&nbsp;Sriram: doing there? Okay, I just want to say two things on this topic.</p><p>&nbsp;On this show, we have no fear, right? We have said there is no risk in AI, right? X-risk is a meme. We have talked about Elon Musk and Donald Trump, right? But I am not going to touch this topic of what is the best Indian restaurant in London.&nbsp;</p><p>&nbsp;Aarthi: We actually get hate. People will leave comments, like 50 comments: how can you say this?</p><p>&nbsp;This is the best. That's&nbsp;</p><p>&nbsp;Anil: the reason I'm asking it. 
That's the reason I'm asking it.&nbsp;</p><p>&nbsp;Sriram: Okay. If you're a London restaurateur watching this and you want us to plug your restaurant, hit us up, right? Hit us up.&nbsp;</p><p>&nbsp;Aarthi: I actually had a chef from one of the other restaurants in London tag me on Instagram and send me a DM being like, that's because you've never come to my restaurant.</p><p>&nbsp;And I was like, hey dude, I'll be there.&nbsp;</p><p>&nbsp;Anil: Yeah. Got it. Sriram, did sneaker culture peak during the pandemic?&nbsp;</p><p>&nbsp;Sriram: Oh, I guess the obvious question you would have to ask is how you would measure the influence of sneaker culture. It's hard to judge. I would say there are some obvious negative signs: Nike's stock price has gone south, and some of the sneaker reseller companies are not doing really well.</p><p>&nbsp;Companies like Supreme and some of the streetwear companies are also maybe not as hot as they used to be. Having said that, my perception is this is a bit like hip hop or anything else, where it's cyclical. And if you look at, this is not exactly classic sneaker culture, but if you look at the rise of something like Hoka or On, or some of these other brands out there, everywhere I go I see people wearing those.</p><p>&nbsp;Or if you look at last year, I really love what Devin Booker is doing with his shoe. And so I think there is space for us to move away from just yet another Jordan release or yet another signature brand to doing something more creative. But I know it feels like a little bit of a lull.</p><p>&nbsp;I can't remember the last drop which made me really excited, but maybe I'm getting a bit older&nbsp;</p><p>&nbsp;Anil: because that's also true. 
But I guess, both of you know a lot of people that are incredibly successful, incredibly wealthy. What's happening that we don't have the Carnegies, the Rockefellers, the Tatas of the world, who actually had an impact with their capital?</p><p>&nbsp;What is your take on why we are not seeing Carnegie-level libraries everywhere, or other things, whether it's here or in India or anywhere? What has changed culturally? You both know so many successful people. What's going on?&nbsp;</p><p>&nbsp;Sriram: If your question is, say, okay, are wealthy entrepreneurs having impact, I would point to everyone from what Elon is doing to what a bunch of others are doing with their companies as maybe the best manifestation of this impact.</p><p>&nbsp;There are a few other examples. The Collisons are obviously funding several projects, and outside of that, you have people like the Joe Lonsdales funding a university in Texas. But I do think one place which has been lacking, and I don't really know why, is that the tech entrepreneurial ecosystem has not really invested in the arts. For some reason, and I'm not exactly sure I can articulate why,</p><p>&nbsp;we haven't seen, I don't know, an opera house funded by a Silicon Valley CEO. There might be some, so if there is one, please come and hit me up. Or a gallery funded by one. It doesn't seem to be in the phenotype or the interest groups, and I'm not terribly sure why. But I would reject the frame, because I think people are having tremendous amounts of impact.</p><p>&nbsp;Maybe there is a little bit of bias, because I would posit some of the folks that you're pointing to did some of these things after they were done with company building. A lot of the people that we know are still very much at the peak of their careers and building these amazing institutions.</p><p>&nbsp;Aarthi: I think I actually agree with the premise of the question. I think the reason you're not seeing it is this. 
I think we briefly touched on this before. Yeah. We do seem to value instant gratification in different ways, and none of these things are instant: being a Rockefeller, building libraries.</p><p>&nbsp;I think we've forgotten that things take time, and you have to build for the long term. We've taken away this sort of long-term thinking, the approach of maybe not in my lifetime, but I'm going to sow the seeds and build a foundation, and it will continue as a project.</p><p>&nbsp;Some of the biggest monuments all happened like that, where it took hundreds of years, but people didn't see it as, oh, it's not going to get done in the next three years, so why should I do it? So we do have this sort of corruption of our psyche in terms of short-term thinking. We are having impact with products, to Sriram's point about Elon and everybody else, but for the other projects that are not part of what you do day to day,</p><p>&nbsp;I think we've just stopped looking at them. It's too long term. I don't know if we'll see the benefit of it. So what, somebody else will do it. It's not my job. And we don't think about long-term projects.&nbsp;</p><p>&nbsp;Sriram: I would say there seems to be an East Coast versus West Coast difference here. Because if you look at the East Coast, it is very common to see a typical master-of-the-universe-style hedge fund titan go out and sponsor a library building, right?</p><p>&nbsp;I think Schwarzman has done that, and others have done that. I don't think you really see that in Silicon Valley, and that may be because Silicon Valley values the building of the new rather than carrying on the old. I want to touch on something else, which is that one of the things living in London has opened me up to is a real appreciation of history. 
Because you live here, and one of my favorite things about the city are these little blue plaques on various buildings, there are about a couple of hundred of them, I believe, and each tells you this famous person lived here, like 100 years ago, right?</p><p>&nbsp;Aarthi: And it is very cool. It's very cool. We walk by this one pretty often, which is T. S. Eliot's. And it's amazing to me that T. S. Eliot used to live here, not so long ago, in this particular house. I can imagine him eating breakfast and writing things and being like, maybe I should publish this. It's incredible to see that.&nbsp;</p><p>&nbsp;Sriram: Yes, and what it leaves me with is two things. One is just a sense of continuity in history.</p><p>&nbsp;You're like, oh my god, it's T. S. Eliot, what have you done with your day, right? But there's another part of it which living in London brings you, which I didn't really have in San Francisco, which is the idea that some of these institutions have existed for a long time before you, and they're going to exist for a long time after you. I was talking to a trustee of a very well known historic institution.</p><p>&nbsp;It's been going for 700 years, right? And they're telling me, look, we are like the 50th iteration of some set of people, and there are going to be probably 1,500 more. And our job is to try and make sure those 1,500 more exist. And that style of thinking is just something I don't think is in the water in Silicon Valley.</p><p>&nbsp;We are so much more fans of, let's just build something anew and take over the world.&nbsp;</p><p>&nbsp;Aarthi: But also, I think you can't get one without the other. I think that's the thing. 
I think the reason why Silicon Valley is so special and has all these people is that it's also a sort of rejection of all these norms. When we first moved to San Francisco, Sriram, we used to say, this is like Mad Max. You keep going through this dusty desert in this rickety bus, and every once in a while there is a passerby.</p><p>&nbsp;And you're like, do you want to hop on? And they just jump onto this thing and you keep going. No one questions where you're really going, you're just going really fast. And you have to have this rejection of everything else to be able to do what you're doing in San Francisco.</p><p>&nbsp;And that's also what makes it really magical and special, because you have this whole bunch of people who believe in themselves, have this confidence, and are completely okay with rejecting everything else outside of them.&nbsp;</p><p>&nbsp;Anil: You still have sriram.eth in your name, even after it's become not in vogue anymore.</p><p>&nbsp;What do people not get about crypto that you still get?&nbsp;</p><p>&nbsp;Sriram: It's a good question. Sriramk.eth, by the way, I hope I got the spelling right. I would say a couple of things. Going back to the earlier point about history, I think crypto is a spiritual successor to so many things that came before it:</p><p>&nbsp;to open source, to cypherpunk culture, to those of us in the early two thousands who would say, here's my public key, please encrypt things back and forth to me. That culture and so many of those trends led to crypto. And I think of it as the ultimate manifestation of what the internet should be architecturally, where you have the governance and the economics being pushed to the ends. Now,</p><p>&nbsp;are we in one of the cycles where the numbers are low? Absolutely. But I do think in some ways this is the manifest destiny of the internet, to go get there. 
And the alternative, if we don't get there, is one where a few large entities in some shape or form, be it governments, be it corporations, wind up controlling our destiny, which is not good.</p><p>&nbsp;Not something I think any of us want to sign up for. So yes, I still have the .eth in my Twitter profile, and it's going to be on there for a long time.&nbsp;</p><p>&nbsp;Anil: Both of you, to our point earlier, are working on what is new media. And one of the things everybody says now is go direct. With everybody going direct these days, is now the best time to go back to old media?</p><p>&nbsp;If you want to have more access, more distribution, and going direct is where everybody's going, do you have to go the other way now?&nbsp;</p><p>&nbsp;Sriram: Do you want to go or should I go?&nbsp;</p><p>&nbsp;Aarthi: Yeah, I'll go. I actually don't think going direct is for everyone. I think the whole go-direct thing came about for a few reasons, right?</p><p>&nbsp;One, it felt like the people who were telling the stories were not painting an accurate picture of what was actually going on. So even in some cases when you would do press interviews, talk about launching a feature, some functionality, some event, it would almost always get written up as something else, or portrayed with this spin of, they're doing this because of this other ulterior motive.</p><p>&nbsp;And we saw so many of these. And us, having worked at these companies, building these products, I saw this sort of thing firsthand working at Facebook. And you realize that there's no such thing. 
We didn't think through some super villainous way of doing things. And so I think at some point people got frustrated and said, you know what, I'm just going to go direct. Which is okay. But the reason the whole not-going-direct thing works is that there's one source of distribution, which means there is this much larger audience you can reach; you can go find the cohorts of people you need to talk to, all of that. Going direct means everybody has to now build a distribution system, and everybody has to figure that out. And I think that is a challenge. Now everyone's got to go full stack and build the whole thing. I just don't think that is sustainable or scalable. Not everybody should be doing it.</p><p>&nbsp;If you think about a Zuckerberg or an Elon Musk, it makes total sense, because in the past they've had issues with that, right? They've been misunderstood, misconstrued, all of that. If you're a small startup, you should do the most pragmatic thing.</p><p>&nbsp;You should find a way to get the best distribution, no matter what that is. And I don't know if you should just take what everybody else says and do that.&nbsp;</p><p>&nbsp;Sriram: I think you hit the nail on the head on two points. One is, I think it's a reflection of the war between the people who cover the tech industry and some of the people in the tech industry itself.</p><p>&nbsp;Second is, if you choose to go direct, what are you actually saying? It's, okay, I'm going to produce content on the internet which has to fight for attention and fight for my audience, and I have to treat it as a core competency for my company on top of everything else my company, or I as a person, am actually doing, right?</p><p>&nbsp;And maybe that's for you, maybe that's not for you. I find the whole thing very reductive. I know where it comes from. I'm a big supporter. 
Just yesterday we had TechCrunch hating on Elon's idea of building robots. It's pretty frustrating, right? So I totally get the idea of the gatekeepers being frustrating.</p><p>&nbsp;Having said that, I would love to see a couple of things. One is a lot more different kinds of craft and modality in how you tell your story. For example, when people say going direct, what they're usually actually talking about is, I'm going to write a long exposé and maybe do a little video, right?</p><p>&nbsp;I think there are so many other ways of doing such a thing. For example, it's been close to 20 years, but I still remember when Google Chrome came out, it had a comic book. I'm not sure whether you remember that, right? A classic comic book. Or when Square first came out, Adam Lisagor, who was the voice of Silicon Valley startups for quite a while, and maybe still is, right?</p><p>&nbsp;He had the voice, he would talk about Square. And so one is, I worry that going direct has taken away a lot of the creativity in terms of how you want to tell your story, in a visual medium and a written medium. The second part of it is, I actually think it downplays the impact of curation and other creators.</p><p>&nbsp;For example, one of the things that Aarthi and I do on our show is, you could do this exact video by yourself, but we can hopefully get a different version out of you. And if you went on, say, I don't know, Larry King when he was alive, or Oprah, they would get a very different version out of you.</p><p>&nbsp;And that version would not exist if you were just talking to the camera. For example, I always think about some of my favorite New Yorker style articles. That is not going to be replicated by a tweet or by other means. And so am I a big fan of going direct? 
I'm really frustrated with the gatekeepers who hate our technology.</p><p>&nbsp;But I worry that people underestimate the amount of effort it takes to create great content, we do this all the time, and to build an audience as a core competency. And finally, I worry that it takes away a lot of the creativity of getting your story across in the best form possible. But yeah.&nbsp;</p><p>&nbsp;Anil: Do either of you think you're too online?</p><p>&nbsp;Sriram: Not me.&nbsp;</p><p>&nbsp;Aarthi: No, I don't think so. Why? Has anybody told you otherwise?</p><p>&nbsp;I mean, I was telling somebody else this: I can't remember the last time I went a full day without the internet. It just has not happened. I think Shaham actually tried.&nbsp;</p><p>&nbsp;Anil: I can't imagine him surviving. What happened?</p><p>&nbsp;Aarthi: No. I can't. He didn't. He did it for, I think, two and a half days or something like that.</p><p>&nbsp;It was insane.&nbsp;</p><p>&nbsp;Anil: I'm impressed.&nbsp;</p><p>&nbsp;Sriram: Yes. I asked, after two and a half days, have you read Anathem by Neal Stephenson? I was like, what has happened to civilization? Or, is food still a thing?&nbsp;</p><p>&nbsp;Aarthi: No, but I would sit right next to him and be texting, and he'd be like, is anything happening? And, I don't want to know, I don't want to know, it'll be okay. Yeah, if you needed to get online, you could. He lasted for two and a half days. I didn't even bother competing.&nbsp;</p><p>&nbsp;Sriram: The thing, from talking with a lot of other people, and Anil knows me a little bit, I am on every messaging medium responding instantly, is that it is so easy to get sucked into what other people want out of you and not be able to do real deep work. And I think the worst problem is you can get a very fake sense of productivity where you're responding to a bunch of things. 
You're doing a bunch of things, but if you look at it over a long period of time, you may not have actually accomplished something very meaningful.</p><p>&nbsp;This is a thing where I worry about it, but then I go back and my screen time keeps going up. And so I'm not fixing it.&nbsp;</p><p>&nbsp;Aarthi: We did like to make fun of Europeans when we first moved here last year. We would get these out-of-office notes being like, we are on holiday, not checking anything. And we'd think, how? You have internet where you're going, you have phones.</p><p>&nbsp;How can you not check stuff? But it is a thing, apparently.&nbsp;</p><p>&nbsp;Anil: That was going to be my follow up question for you. I think you are one of the reasons group chats have gotten so big in Silicon Valley. Do you think you've hurt productivity, with all the pings everybody gets?&nbsp;</p><p>&nbsp;Sriram: I've heard this from multiple people.</p><p>&nbsp;Yes, but it is worth it, because it is entertaining and fun. And so what if your next AI breakthrough is delayed by a year or so? It's worth it for the gossip.&nbsp;</p><p>&nbsp;Aarthi: I think whoever says otherwise gets kicked out of these groups.&nbsp;</p><p>&nbsp;Sriram: That is true. That is true.</p><p>&nbsp;Anil: What would it take for you guys to move to India? What would have to be true?&nbsp;</p><p>&nbsp;Aarthi: Oh, we will actually do it at some point in our lives. One, I think about growing up in India, and I think it's a very different experience, and I feel like our kids don't have that. And our parents are getting old, so we want them to have some consistent amount of time there.</p><p>&nbsp;We don't know when that is going to be, but we do want to go to India and live there for a bit.&nbsp;</p><p>&nbsp;Anil: Both of you have worked on so many different products. Yes. 
Was there something you both were sure was going to be successful and it didn't pan out? Where you were absolutely sure, this is going to be successful, I know it will be, and it gets to market or gets customers and just doesn't have the impact?&nbsp;</p><p>&nbsp;Aarthi: A bunch.&nbsp;</p><p>&nbsp;Sriram: The one I worked on for a long time: do you remember what Yahoo Pipes was? Oh yeah. For those of you young ones listening to this, there was an era where, in 2004, Gmail blew everyone's mind with its UI, because one, it gave you one gigabyte, but second, it used Ajax and XML for the first time,</p><p>&nbsp;where you click on something on the page and it updates without a full reload, right? And we were like, oh my god, first we discovered fire and now we have pages updating instantly. What is humanity going to do next? It blew our minds. And so there was an era from 2004 to roughly 2009, 2010,</p><p>&nbsp;which was the age of mashups and Yahoo Pipes, and I worked on a competitor called Microsoft Popfly for a brief period of time. The idea was you would take an open API from, say, Flickr, and you would stitch it together with Google Maps. The idea was that this data was going to flow all over the internet and you could stitch these websites together like little programming building blocks. It didn't work out, for a few reasons. I would say the rise of Facebook and walled gardens, and the rise of advertising as the dominant way to monetize these products, really destroyed that whole ecosystem. Zapier may be the only company which has really lasted doing that.</p><p>&nbsp;It's interesting to think about that now, because in 2024, with agentic applications, we may have a shot, where maybe what we were lacking then was the right programming metaphor, or the right smart agent to do the stitching together for us. 
And that's what it would be doing. Of course, the question of how you monetize this is still TBD.</p><p>&nbsp;But I was so sure in 2005 that mashups were going to be a thing. They didn't really turn out to be a thing.&nbsp;</p><p>&nbsp;Aarthi: I was thinking, Kinect was, I think, one of the early ones for me. Kinect was this product from Microsoft Xbox. It was a competitor to the Wii, but without a handheld controller; it would do motion sensing and you could do gaming, and it was a huge push.</p><p>&nbsp;I worked on the loader, binder, compiler, the SDK, the very low-level parts of it. So I didn't work on any of the sexy, cool parts. But even going through the motions of shipping that software, it felt so dramatically different from anything else that had been on the market.</p><p>&nbsp;And I was very sure that this was it, that we'd cracked something in how normal people are going to game and interact with computers and play against each other. But it never really took off after that first initial burst of momentum. And I think a lot of it comes down to having good games.</p><p>&nbsp;Later on, with Oculus and everything else, you see that too: you have to have good content libraries, game libraries, to be able to bootstrap these and sustain the momentum. The technology was way ahead of the curve, but everything else didn't quite follow through, like building the platform and ecosystem around it.</p><p>&nbsp;Netflix 3D, I think. Nobody knows about this, but&nbsp;</p><p>&nbsp;Anil: Oh, I didn't even know this existed.&nbsp;</p><p>&nbsp;Aarthi: Yeah. There was one CES where I went and begged a bunch of 3D TV manufacturers to let me go build Netflix 3D, where you have to do left-eye, right-eye encoding, you have to delay one just a little bit.</p><p>&nbsp;And so we built this, and I think we got seven movie titles licensed to be able to play. 
And people were like, yeah, this is not a thing, don't waste your time. So it didn't even launch. That was how much of a flop it was. But we ended up learning a lot. Again, I worked on the Netflix SDK.</p><p>&nbsp;The company was moving from DVDs into streaming, so my job was to build the streaming player software. 3D was very much an afterthought, a let's-try-it. Very few consumers actually had 3D TVs at home, so it didn't even make sense.&nbsp;</p><p>&nbsp;Sriram: Aarthi, do you want to tell the story of Netflix and Qwikster and the teal shirt?</p><p>&nbsp;Aarthi: I do not. Do not watch the SNL sketch. It's still there. It's still live. So&nbsp;</p><p>&nbsp;Sriram: give us a little bit of what happened there.&nbsp;</p><p>&nbsp;Aarthi: So, 13, 14 years ago. Netflix had the DVD side, and it also had the streaming side. Netflix was a big public company, but known mostly for DVDs, the red envelopes that people got at their houses.</p><p>&nbsp;The biggest competitor there was Blockbuster, not really anyone on the streaming side of things. And then Reed had this idea. Initially, he said, we should have this hardware device, which I'm going to build, and we are going to do streaming on this hardware device, and that's how it's going to be.</p><p>&nbsp;And then at the last minute, literally, I think, a month or two before the launch of this device, he basically said, actually, every device should be a Netflix device. I reject this notion of the single device. The internet is going to catch up, we're going to have better streaming capability, we should just make every device a streaming device.</p><p>&nbsp;And he spins off this company into Roku, which is a different public company now. And my job, at the time I joined, was to build the SDK that goes into TVs and set-top boxes and Blu-ray players, and figure out the runtimes for each one of these and how you ship for them, right? 
But then they had this decision point on what do we do with this DVD business?</p><p>&nbsp;And at that time, I think one morning, Reed basically says, again, classic Reed, I'm going to spin this into a different company, and that company is going to be called Qwikster. None of us had heard of this name. We were like, what is going on? There's this guy on Twitter who had this handle, @Qwikster.</p><p>&nbsp;And turns out that he was just some random guy. And people were telling him, hold on to this Twitter handle, people are going to give you billions of dollars, because Reed has now made this his company name. And Netflix at the time was very small, I think 70 engineers or something, even for a public company.</p><p>&nbsp;And I look around from my first floor, second floor office, or the cafeteria downstairs. And there is Reed sitting there, couple of mics in this corridor, wearing the teal shirt, the now famous teal shirt, doing this interview, basically saying we are breaking this company into two companies. Netflix is the streaming business and the old thing is Qwikster. And we all were like, what just happened here? And the stock price tanked so much it nearly killed the business, for no reason, just with the press announcement.</p><p>&nbsp;It was crazy. But Reed, to his credit, three months later, reversed it. Netflix has this culture, I loved the culture of Netflix, because they take movies very seriously. And every three months we used to do this all hands meeting, but they would bus us over to this movie theater and rent the whole place for the day and just do this all hands.</p><p>&nbsp;And every quarter we had to choose a movie that depicted how this quarter had gone.
But that quarter, the thing that showed up on the screen was the comeback kids, because we had somehow survived this whole adversity, and the whole place just broke out into this applause for a few minutes.</p><p>&nbsp;And it was so emotional for all of us because we had actually come through this really hard time for the business, all because of this one press announcement, and Reed was very emotional as well. It was crazy.&nbsp;</p><p>&nbsp;Sriram: Yeah, by the way, can I add, sorry, I know we're getting super long, but I want to add one technology which I believed in which failed. There's a lot of these, right?</p><p>&nbsp;ActiveX.&nbsp;</p><p>&nbsp;Anil: Oh&nbsp;</p><p>&nbsp;Sriram: That's a deep cut. Deep cut, right? Like, all right. Only the OGs will understand this reference, but for you young ones, right? Like back in the 90s, Microsoft had huge envy of Java, right? Java was going to be this huge programming language. And one of the key things Java had going for it was this thing called the applet, which was this piece of software that, in theory, you could embed and it would run as a program inside a web browser.</p><p>&nbsp;And the idea was, Sun was saying, hey, who needs desktop software, right? Who needs Microsoft, and Sun can rule the world. Spoiler alert, they did not. They destroyed themselves. But that's a story for another day. But applets, which were a terrible technology because you just spent several minutes waiting for the Java applet to attempt to load, generated so much envy that Microsoft's like, hey, we need our version, right? So they basically invented the standard called ActiveX, which basically was a way where you could take theoretically any piece of code and run it inside your program.
So you could be like, okay, here's my Word document, but because I'm feeling a bit crazy, I'm going to stick my Word document into my Excel spreadsheet, and who can stop me now, right?</p><p>&nbsp;A little bit like the demo Anil did two hours ago, except that it didn't work. It sucked. It was incredibly complicated, had a million different security issues. And ultimately, I think it's probably lingering around in some way, generating zero days every few weeks or something, but it ultimately died.</p><p>&nbsp;But, okay, sorry, next question. Okay.&nbsp;</p><p>&nbsp;Anil: So I have three last questions. Both of you were at Microsoft, I think, right, if I remember correctly? If I talk about Microsoft with both of you, did you imagine Microsoft to be this successful? Cause there was a point in time, people gave up.&nbsp;</p><p>&nbsp;Sriram: Yeah. Let me just stop you.</p><p>&nbsp;Absolutely not. Because when I left Microsoft, I was like, the stock is not going up at all. I am going to sell everything right now. That is the end of Microsoft. Funny enough, I was talking to, I don't want to name this person, he's a very, let's say a very well known person in our world.</p><p>&nbsp;And he's made a lot of money investing, but he told me, you know what? I just kept my Microsoft stock and it did so well. So I would say, look, I think the serious version of the question is, we left Microsoft in 2011. I think Aarthi left slightly after I did. That was a very different Microsoft, right?</p><p>&nbsp;Microsoft had missed the search era. It had obviously missed the mobile era, right? Windows Phone wasn't exactly setting anyone on fire. People were not standing in line for the brown Zune. Another deep cut, by the way.&nbsp;</p><p>&nbsp;Aarthi: I love the brown Zune, speak for yourselves.</p><p>&nbsp;Sriram: I'll tell you about the brown Zune, by the way. For those of you who don't know, just Google it.
One of my friends got his car broken into in San Francisco around that time, and the thieves took everything, but they left the brown Zune. So anyway, nobody wanted to squirt with the Zune. If you understood that reference, drop it in the comments. But there was a sense of, like, the glory days of Microsoft are behind us.</p><p>&nbsp;And I remember in the last few weeks at the company, Satya, who was a senior executive, but not CEO, he actually tried to talk us out of leaving, but I remember telling him, I was like, no, there's no future here. I just want to go to Silicon Valley, do something else. I just want to say how much credit Satya deserves.</p><p>&nbsp;For turning it around in so many ways: removing Windows as the focus of the company, really embracing the cloud, making incredible acquisitions, like getting GitHub, getting Nat, and so many moves. Because that was not expected at all.&nbsp;</p><p>&nbsp;Anil: Maybe one question there is, are there a lot more companies like that?</p><p>&nbsp;That if the right Satya were there, the outcomes would be different.&nbsp;</p><p>&nbsp;Aarthi: I think so. I'd like to think so. My hot take is, a lot of these companies in the last 10 years, in this ZIRP era, we have execs who are just larping. They're not meant to be execs at these companies. They look at it as this sort of stereotypical, this is what we do.</p><p>&nbsp;We performance manage, we do this thing. It's very much like gameplay. And we need to have more people who will actually do what the term you said before is, go do shit. Like actually go take some bold moves, ship some software, do things that are not just what is expected of an exec as such.</p><p>&nbsp;And I think we are already seeing a lot of these big exec roles having this time of reckoning.
And I think that's actually a good thing; there is gonna be a more positive shift there.</p><p>&nbsp;And I would like to think that some of these companies, if they're run well and have the right leaders in place, you will see them do really amazing things.&nbsp;</p><p>&nbsp;Sriram: Yeah. I was hoping that Aarthi would list all the execs by name that she thought were larping, but we did not get there. Maybe for paying subscribers, later.</p><p>&nbsp;Anil: If there's like a nonfiction topic, could be a company, could be an organization, anything, that you could apply a great author to, to write a book for both of you, what book doesn't exist that both of you want? It could be on a company, topic, organization, anything. Nonfiction, not fiction.&nbsp;</p><p>&nbsp;Sriram: Why don't you go first?</p><p>&nbsp;Aarthi: Oh man, it's a good question. Oh,&nbsp;</p><p>&nbsp;Sriram: I think this is going to be an obvious answer for people that know me, but I would say professional wrestling is dramatically underappreciated as an art form, and in its impact on popular culture. As a way to understand storytelling and character. And also the fact that it has these huge, crazy, larger than life characters, huge, crazy real life storylines where the reality is way more dramatic and unbelievable than anything which happens on TV.</p><p>&nbsp;And I also think it's shaped American politics. Kayfabe comes from WWE. The art of cutting a promo comes from professional wrestling. So I think there's a deep impact on American culture. It is very Americana in a very deep way, and I still think it's underappreciated. And I would love to see a serious writer actually go really dig into it.</p><p>&nbsp;Aarthi: Okay. I don't know. I think, yeah, I like some of the Bollywood movies. I think we talked about how we are now talking more about entrepreneurship.
I think one of the movies that started doing that was Guru, touching on Dhirubhai Ambani and building off of his backstory. But we really haven't seen a proper biopic, actual collective material on how these people did what they did, especially for Dhirubhai Ambani.</p><p>&nbsp;I just don't think we've seen a lot there. And maybe it's the state that we are in; there's just a lot of politics and disagreements on how things exactly happened. I don't know why, but I think there should be more people telling stories of entrepreneurs, especially in India, where, I think when we grew up in India, it was a very different time.</p><p>&nbsp;Entrepreneurship was not something that you could go out and say, hey, I'm starting this company. People would be like, okay, but what are you actually doing? What's the job? Now I think it's become more accepted, and I think we need to be telling more stories.&nbsp;</p><p>&nbsp;Sriram: Yeah, I'm actually thinking of working on a long term project.</p><p>&nbsp;Maybe a book around this, where we really dig into how founders today actually operate. Because I feel like a lot of storytelling around founders kind of slips into, we're launching something and there's a press profile piece, or it is the story of the arc of the company's formation.</p><p>&nbsp;I'm very curious in how the regular CEO that you and I might admire, how do they make decisions? How do they spend Monday mornings? How do they handle hirings and firings? How do they think about motivating people? There's a lot of stuff in there. So I'm thinking of working on this as a project. I'm talking to a few people, but that's something I think is just massively undercovered still.</p><p>&nbsp;Anil: Okay.
Last question for both of you:&nbsp;</p><p>&nbsp;which Indian movie that came out before 2000, could be any Indian language, should everybody watch?&nbsp;</p><p>&nbsp;Sriram: Even a non-Indian audience should watch.&nbsp;</p><p>&nbsp;Aarthi: Wow, okay. One of my favorites, yeah, one of my favorites is this Tamil movie.</p><p>&nbsp;It's called Iruvar. And it didn't do very well.&nbsp;</p><p>&nbsp;Anil: Rahman's best work, I think, to date.&nbsp;</p><p>&nbsp;Aarthi: I think so too, arguably his best work. And also the&nbsp;</p><p>&nbsp;Anil: background score, I think, is one of the most gifted things he's ever done for anyone. It's incredible. PC Sriram is just God in that.</p><p>&nbsp;Aarthi: Fantastic. Fantastic. All across the board, the talent, acting, music, everything, cinematography. I think it's such a good movie that, damn, it didn't do well commercially, but once every few years I go back and watch it and it's just deeply inspiring. It's a real life story, but it's made into a movie, so it's dramatized a bit, but I really liked that movie.&nbsp;</p><p>&nbsp;Sriram: By the way, I didn't think we were going to mention Rahman on this episode. So that's amazing. I would say if I had to pick one, it's maybe an obvious one.</p><p>&nbsp;But I'm going to pick Indian, which is the classic Kamal Haasan movie. I forget which year it came out, maybe it's like early 90s. 95. 95. And so just for those of you who may not be familiar, who may not know the movie, the basic construct of it is, it's basically about corruption in India and the fight against it, and you have the lead protagonist wind up having to fight corruption, basically assuming this sort of mercenary role.</p><p>&nbsp;And eventually, it's a bit of a spoiler, but it turns out that he will run into his own son, who actually would be corrupt, and what happens over there.
And now, if I think about it as I get older and I try and deconstruct the stories, there are so many easier ways they could have done the movie.</p><p>&nbsp;If you watch the movie, the fact that it overlays India's freedom struggle and that arc in there, it overlays the real tragedy of, like, some of the cost of not giving in to corruption and sticking to principles, where his daughter would suffer the consequences, all the way leading to the end, right?</p><p>&nbsp;Like, this movie could easily have been done without so much craft and so much design, and it wouldn't have had the impact that it has. I was reminded of this recently because Aarthi and I were watching this new Netflix movie called Rebel Ridge, it's like a fun action movie.</p><p>&nbsp;It may not be an all time classic, but it's still a fun movie. But I was thinking, it's about somebody showing up at a town and fighting corruption. And I was thinking about, like, how much Indian got so many elements so right, in a way where, I would say, it's a timeless classic.</p><p>&nbsp;If you haven't seen it, it's a 90s movie, go watch Indian. It introduces you to so many parts of Indian culture.&nbsp;</p><p>&nbsp;Anil: For sure. Thank you both for answering my questions.</p><p>&nbsp;Aarthi: Oh my gosh, have you considered a career in podcasting? You are so good at this. You should just join us.</p><p>&nbsp;We can call it the Aarthi, Sriram and Anil show.&nbsp;</p><p>&nbsp;Anil: I like asking questions more than answering questions.&nbsp;</p><p>&nbsp;Sriram: I just want to say, first of all, thank you for the questions. They were delightful. And we might need to follow up with that. But thank you for coming on the show. I feel you've just done so many incredible, amazing things, but I almost feel like Silicon Valley, or the world outside Silicon Valley, hasn't discovered you yet.</p><p>&nbsp;And that's going to very rapidly change.
So I want to, this is basically us leading the seed round of your coming out and becoming famous party. But I would say, what we wanted to do in this episode is not just talk with you as a founder, right? Like, the fact that you and your brother built an enterprise company which is just at the cutting edge of design.</p><p>&nbsp;And that comes from so much taste. The fact that you've been funding all of these young people and have dramatically changed their lives. And so much more that I know you're doing behind the scenes. I think you're one of a kind. I am so happy you finally agreed to do this after five years.</p><p>&nbsp;Hopefully it won't take five more years for you to come back to this again, but this was such a delight for us.&nbsp;</p><p>&nbsp;Anil: I had a lot of fun. Thank you guys for doing it.&nbsp;</p><p>&nbsp;Aarthi: Thank you.&nbsp;</p><p>&nbsp;Sriram: Thank you. And until next time. Bye. Yeah.</p>]]></content:encoded></item><item><title><![CDATA[EP 88 Silicon Valley legend Gokul Rajaram]]></title><description><![CDATA[WATCH: Youtube]]></description><link>https://www.aarthiandsriram.com/p/ep-88-silicon-valley-legend-gokul</link><guid isPermaLink="false">https://www.aarthiandsriram.com/p/ep-88-silicon-valley-legend-gokul</guid><dc:creator><![CDATA[The Aarthi and Sriram Show]]></dc:creator><pubDate>Mon, 28 Oct 2024 07:14:25 GMT</pubDate><content:encoded><![CDATA[<p><strong>WATCH: <a href="https://substack.com/redirect/c039947f-07e0-40a7-9ded-6e12387a0321?j=eyJ1IjoiMXZ0cXJuIn0.w0Ur6iDuSB1RP7xBXsq_IXTQ9Uys-qVYBOiWxhv-QQU">Youtube</a></strong></p><p><strong>LISTEN: <a href="https://substack.com/redirect/34c05b17-88b1-4bab-8d63-83e4963f9bb5?j=eyJ1IjoiMXZ0cXJuIn0.w0Ur6iDuSB1RP7xBXsq_IXTQ9Uys-qVYBOiWxhv-QQU">Spotify</a> | <a href="https://substack.com/redirect/b1675952-1da6-4bfa-8526-62ead6a45574?j=eyJ1IjoiMXZ0cXJuIn0.w0Ur6iDuSB1RP7xBXsq_IXTQ9Uys-qVYBOiWxhv-QQU">Apple</a></strong></p><p>There are only a few people in Silicon
Valley who can be referred to with a mononym. Gokul Rajaram or just &#8220;Gokul&#8221; to us all is one of them. Gokul&#8217;s career accomplishments are the stuff of Silicon Valley legend. From helping create Google AdSense to driving Facebook ads and then Square and DoorDash, Gokul has done and seen it all. He is also a prolific investor in companies of all stages. More than all of that, Gokul is a friend and one of the people we call for advice often.</p><p>In this episode we cover it all. Gokul&#8217;s career. The current state of middle management and career advice for people in late stage companies. Founder Mode (!). Lessons learned from Zuck, Jack and Tony of DoorDash. Sriram and Gokul battle it out on the importance of titles. And much more. This was a blast, enjoy!</p><p>0:00 - Intro&nbsp;<br>1:10 - Meet Gokul Rajaram, on the Mount Rushmore of Silicon Valley<br>2:19 - Gokul's relationship with Aarthi and Sriram&nbsp;<br>4:00 - What is Founder Mode? And Zuck's Micromanagement&nbsp;<br>6:33 - Founder Mode vs Management Mode&nbsp;<br>8:26 - Why Gokul disagrees with Paul Graham&nbsp;<br>10:56 - Learnings from Google&#8217;s Larry and Sergey&nbsp;<br>15:58 - What were the early days at Google like?&nbsp;<br>18:58 - What makes a great CEO?&nbsp;<br>20:42 - How to engage with high performing executives&nbsp;<br>25:45 - Why Gokul left Google to launch his own startup&nbsp;<br>28:53 - Chamath's Acquisition offer&nbsp;<br>31:30 - Fundraising advice for founders<br>34:13 - Gokul&#8217;s HOT take on AI startups&nbsp;<br>38:10 - The evolving consumer apps landscape&nbsp;<br>40:55 - Career Advice for young people&nbsp;<br>48:00 - When is the right time to start a company?&nbsp;<br>52:17 - Why titles don't matter&nbsp;<br>1:01:45 - Why Gokul left Facebook to join Square&nbsp;<br>1:06:46 - Regret Minimization Framework&nbsp;<br>1:07:28 - Gokul's Best Bet - DoorDash&nbsp;<br>1:13:18 - DoorDash&#8217;s secret hiring process&nbsp;<br>1:16:11 - Sriram&#8217;s favorite
interview question&nbsp;<br>1:17:30 - Gokul&#8217;s advice for first time founders&nbsp;<br>1:20:10 - Final Thoughts</p>]]></content:encoded></item><item><title><![CDATA[EP 87 Rio Ferdinand on life after Man U, what makes Ronaldo and Jude Bellingham great and more.]]></title><description><![CDATA[Now for a different kind of guest.]]></description><link>https://www.aarthiandsriram.com/p/ep-87-rio-ferdinand-on-life-after</link><guid isPermaLink="false">https://www.aarthiandsriram.com/p/ep-87-rio-ferdinand-on-life-after</guid><dc:creator><![CDATA[The Aarthi and Sriram Show]]></dc:creator><pubDate>Sun, 08 Sep 2024 12:02:41 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/oKseM0xX5lo" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Now for a different kind of guest.  We are lucky to be joined by Rio Ferdinand - former Manchester United captain.  This episode was a blast for socc&#8230;wait..FOOTBALL fans.</p><p><br>From the mindset of what makes someone like Cristiano Ronaldo great to what he sees in Jude Bellingham to lessons learned from Sir Alex Ferguson, Rio was a blast to talk to.<br><br>Enjoy!</p><p></p><div id="youtube2-oKseM0xX5lo" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;oKseM0xX5lo&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/oKseM0xX5lo?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div>]]></content:encoded></item><item><title><![CDATA[EP 86: The student who built a fusor in his bedroom using Claude walks us through his process.]]></title><description><![CDATA[The following X post probably showed up in your feed last week -a college student building a nuclear fusor (whatever *that* is) in his 
bedroom.]]></description><link>https://www.aarthiandsriram.com/p/ep-86-the-student-who-built-a-fusor</link><guid isPermaLink="false">https://www.aarthiandsriram.com/p/ep-86-the-student-who-built-a-fusor</guid><dc:creator><![CDATA[The Aarthi and Sriram Show]]></dc:creator><pubDate>Mon, 02 Sep 2024 15:32:58 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!R2gs!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82ddd130-cd5e-42f0-b31f-ade512bb6ce3_1194x1232.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The following X post probably showed up in your feed last week -a college student building a nuclear fusor (whatever *that* is) in his bedroom. On top of that, using Claude 3.5 sonnet using some very interesting uses of AI.<br></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!R2gs!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82ddd130-cd5e-42f0-b31f-ade512bb6ce3_1194x1232.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!R2gs!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82ddd130-cd5e-42f0-b31f-ade512bb6ce3_1194x1232.png 424w, https://substackcdn.com/image/fetch/$s_!R2gs!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82ddd130-cd5e-42f0-b31f-ade512bb6ce3_1194x1232.png 848w, https://substackcdn.com/image/fetch/$s_!R2gs!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82ddd130-cd5e-42f0-b31f-ade512bb6ce3_1194x1232.png 1272w, 
https://substackcdn.com/image/fetch/$s_!R2gs!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82ddd130-cd5e-42f0-b31f-ade512bb6ce3_1194x1232.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!R2gs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82ddd130-cd5e-42f0-b31f-ade512bb6ce3_1194x1232.png" width="340" height="350.82077051926296" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/82ddd130-cd5e-42f0-b31f-ade512bb6ce3_1194x1232.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1232,&quot;width&quot;:1194,&quot;resizeWidth&quot;:340,&quot;bytes&quot;:962826,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!R2gs!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82ddd130-cd5e-42f0-b31f-ade512bb6ce3_1194x1232.png 424w, https://substackcdn.com/image/fetch/$s_!R2gs!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82ddd130-cd5e-42f0-b31f-ade512bb6ce3_1194x1232.png 848w, https://substackcdn.com/image/fetch/$s_!R2gs!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82ddd130-cd5e-42f0-b31f-ade512bb6ce3_1194x1232.png 1272w, 
https://substackcdn.com/image/fetch/$s_!R2gs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82ddd130-cd5e-42f0-b31f-ade512bb6ce3_1194x1232.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Aarthi and I spoke to HudZah and he walked us through<br>a) How he uses Claude and AI in general. He screenshares his desktop and walks us through. 
It is very different from how a lot of us use Claude and AI in general<br>b) How he went about building a nuclear fusor safely.<br><br>This was a blast, highly recommend you watching this one.</p><p></p><div id="youtube2-fK5U-ejFj0k" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;fK5U-ejFj0k&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/fK5U-ejFj0k?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[EP 85: SB 1047 passes the CA house - we discuss it and AI safety with a16z's Martin Casado.]]></title><description><![CDATA[SB 1047 just passed the California assembly.]]></description><link>https://www.aarthiandsriram.com/p/sb-1047-passes-the-ca-house-we-discuss</link><guid isPermaLink="false">https://www.aarthiandsriram.com/p/sb-1047-passes-the-ca-house-we-discuss</guid><dc:creator><![CDATA[The Aarthi and Sriram Show]]></dc:creator><pubDate>Wed, 28 Aug 2024 22:59:01 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/Wbdlz-pvdVg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div id="youtube2-Wbdlz-pvdVg" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;Wbdlz-pvdVg&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/Wbdlz-pvdVg?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p><br><br>SB 1047 just passed the California assembly. 
The proposed legislation in California impacting large AI models has sparked fierce debate and criticism. You&#8217;ve probably seen multiple op-eds, tweets and articles, and as we write this, it just passed the California Assembly with minimal votes a few minutes prior.</p><p>I (Sriram) need to come clean here: I&#8217;m not a fan of SB 1047. I believe it to be harmful to startups and AI model development, with vast negative consequences for the AI ecosystem in California. However, I do acknowledge that others (especially on X) disagree with me.</p><p>To cover all of this, we brought on one of the people who has really led the charge against SB 1047 and happens to be one of the leading investors in AI - and a partner and a friend - Martin Casado, General Partner at a16z. Martin has emerged as one of the key voices in this conversation and, in my view, has made it his personal mission to try and educate people on the harms it brings. Note: this was recorded last week.<br><br>We cover a lot of ground on SB 1047 and AI safety in this conversation. I try - with my biases - to steelman some of the arguments from the &#8220;other camp&#8221; of AI safety. We talk about how former Speaker Nancy Pelosi and other prominent Democrats and Republicans have come out against the bill. We talk about some of the key individuals involved, from California State Senator Scott Wiener and Dan Hendrycks to others in the AI safety world like Max Tegmark. We talk about the specifics of the bill and also the *spirit* of it. 
This turned into a fascinating - and timely - conversation not just as this bill heads to CA Gov Newsom but also as a bellwether for AI regulation around the world.<br><br><a href="https://www.youtube.com/watch?v=Wbdlz-pvdVg&amp;t=150s">2:30</a> - What is SB1047?<br><a href="https://www.youtube.com/watch?v=Wbdlz-pvdVg&amp;t=250s">4:10</a> - What is the origin of the SB1047 bill?<br><a href="https://www.youtube.com/watch?v=Wbdlz-pvdVg&amp;t=374s">6:14</a> - Should AI be regulated?<br><a href="https://www.youtube.com/watch?v=Wbdlz-pvdVg&amp;t=816s">13:36</a> - Who is funding this bill? &#8216;Baptists vs bootleggers&#8217; and Nick Bostrom&#8217;s Superintelligence book being the origin point.<br><a href="https://www.youtube.com/watch?v=Wbdlz-pvdVg&amp;t=995s">16:35</a> - Scott Wiener&#8217;s support of the bill<br><a href="https://www.youtube.com/watch?v=Wbdlz-pvdVg&amp;t=1174s">19:34</a> - Open source benefitting the software world, and risks to open source due to this bill<br><a href="https://www.youtube.com/watch?v=Wbdlz-pvdVg&amp;t=1330s">22:10</a> - Are large models more dangerous?<br><a href="https://www.youtube.com/watch?v=Wbdlz-pvdVg&amp;t=1450s">24:10</a> - Is there a correlation between the size of models and the associated risk?<br><a href="https://www.youtube.com/watch?v=Wbdlz-pvdVg&amp;t=1605s">26:45</a> - How would Martin frame any regulations on AI? What&#8217;s a better way?<br><a href="https://www.youtube.com/watch?v=Wbdlz-pvdVg&amp;t=1726s">28:46</a> - Nancy Pelosi opposes the bill. Some famous researchers are for the bill. Who comprises the two opposing camps and what is the motivation?
<br><a href="https://www.youtube.com/watch?v=Wbdlz-pvdVg&amp;t=1990s">33:10</a> - Why does Pelosi oppose the bill?<br><a href="https://www.youtube.com/watch?v=Wbdlz-pvdVg&amp;t=2100s">35:00</a> - Leopold Aschenbrenner and the &#8220;Situational Awareness&#8221; paper<br><a href="https://www.youtube.com/watch?v=Wbdlz-pvdVg&amp;t=2240s">37:20</a> - Non-system people viewing systems - computer systems are not parametric but sigmoidal.<br><a href="https://www.youtube.com/watch?v=Wbdlz-pvdVg&amp;t=2490s">41:30</a> - AI is the ultimate &#8216;Deus ex machina&#8217; (god from the machine)<br><a href="https://www.youtube.com/watch?v=Wbdlz-pvdVg&amp;t=2760s">46:00</a> - Anthropic&#8217;s investment in AI safety<br><a href="https://www.youtube.com/watch?v=Wbdlz-pvdVg&amp;t=2895s">48:15</a> - If you&#8217;re an AI founder, what can you do about this bill today?<br><a href="https://www.youtube.com/watch?v=Wbdlz-pvdVg&amp;t=3000s">50:00</a> - Why is this bill a personal issue for Martin?</p><p></p><p>Enjoy!</p><p>- Aarthi and Sriram</p><h2>Transcript:</h2><p><strong>&nbsp;00:43 - Welcome Martin Casado of a16z</strong></p><p>[Sriram Krishnan] (0:43 - 2:45)</p><p>Ladies and gentlemen, we have a very special episode for you here today. One of the most interesting and talked about topics in the technology industry, especially in Silicon Valley over the last several months, is something you might have heard of as <a href="https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB1047">SB1047</a>, or to use the expanded name, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. Now, we're going to have a lot of questions in there.</p><p>So this is a bill, a draft bill that has been proposed, which honestly, you know, a lot of people, including me and many others I work with, think is going to be quite harmful.
One of the people who's been leading the charge on talking about this is my partner and friend, Martin Casado. Martin leads a lot of our AI and infrastructure investing.</p><p>Honestly, one of the nicest people I know, but he's really worked up about this topic. Now, in the spirit of honesty, I should establish a bit of my Bayesian priors, if you will. I am not a fan of this act.</p><p>I think it's quite harmful, for all the reasons that, I think, you know, Martin and others will get into. But I want to try, as much as possible, to bring in some of the other opinions and takes, because this is a little bit of a heated discussion. So what I'm going to try and do over the next hour or so is ask Martin a bunch of stuff about this act, why he's so worked up on it, you know, what his views are, but also try and have discussion, you know, from others who, you know, maybe agree with the act or have other points of view.</p><p>Let's see where this goes. Should be a fun discussion. But with that, Martin, welcome to the show, and thank you.</p><p>&nbsp;[Martin Casado]</p><p>Thanks for doing this. I'm so glad to be here. It's going to be so much fun.</p><p><strong>2:30 - What is SB1047?</strong></p><p>[Sriram Krishnan] (2:30- 2:45)</p><p>All right. Maybe one good place to start is for our audience who may not, you know, have been paying attention or may not be paying attention too deeply. Could you give a quick overview of what SB 1047 actually is?</p><p>[Martin Casado] (2:46 - 4:11)</p><p>Maybe the most important thing to realize is this has been such a moving target. It evolves almost daily. And it actually just changed again in the last couple of days.</p><p>So I think probably the best way to describe it is to provide a general overview of its spirit rather than the letter, because the letter is really in flux.
So the spirit of the bill is that if you're working on state-of-the-art AI models, and AI itself has a very vague definition, if you're working on state-of-the-art AI models that are over a certain threshold, right now it's a hundred million dollar training run.</p><p>Then two things happen. One of them is you should do some reporting to a state agency, and where this agency sits has moved around, on what you're doing to keep it safe. And then another one is, if somehow it results in some catastrophic harm, then there is some liability if you did not do kind of best practices to keep it safe.</p><p>And there's a lot of details around this, such as, well, okay, if it's open source, then it only applies if you're fine-tuning the model for over $10 million, and a bunch of details. But I think for the purposes of the start of this conversation: models over a certain level are audited by a state entity somewhere, and there is some liability if you don't do best practices around safety.</p><p><strong>4:10 - What is the origin of SB1047 bill?</strong></p><p>[Sriram Krishnan] (4:12 - 4:32)</p><p>Maybe I think one good place to start is, where did this even come from? So this is a California draft bill, right? And, you know, I would say in some ways it seemed to have come out of nowhere, or maybe caught a lot of people by surprise.</p><p>Do you remember when you first heard of this and what your original reaction was?</p><p>[Martin Casado] (4:32 - 6:00)</p><p>So I actually, this is kind of funny, I've been trying to think about when I did first hear of it. So here's where I think it actually came from. And then maybe I'll answer the personal question.</p><p>California wants to be the EU when it comes to tech policy. We saw this with GDPR, right?
So like the EU goes and does something, which so often tends to be a terrible idea.</p><p>Like GDPR was probably the best thing that could ever happen to the social networking monopolies, because all of a sudden it makes it hard for startups to compete. And so the EU was looking at passing kind of AI safety stuff. And then of course, California wants to follow suit. Now at the time that they did it, there was this executive order from the Biden administration that they sort of mimicked.</p><p>But since then, the federal government has really softened their stance and changed their posture on this. But California has not. So California, and Scott Wiener in particular, is uniquely pushing probably the most kind of comprehensively negative-for-innovation effort, even though the federal government has softened its stance.</p><p>And maybe the only analog is the EU. But I would say even in that case, it's not working. Now, as far as when did I hear about it?</p><p>This was kind of kept pretty secret until pretty late in the process. And so I've been involved in a number of AI safety discussions. I went to the Chuck Schumer hearings in DC.</p><p>And so somewhere along the way, the fact that we had kind of the most pernicious bill in California jumped up. And then I just realized that was probably the best use of my attention and efforts.</p><p>&nbsp;<strong>6:14 - Should AI be regulated?</strong></p><p>[Sriram Krishnan] (6:00 - 6:58)</p><p>Like I said in the beginning, I think this bill is a very bad idea, but I'm going to try and throw at you some of the arguments from people who may believe in this, or maybe believe in some of the risk posed by AI. I guess the first question would be, there are a lot of industries which have regulation, right? You know, the airline industry, for example.</p><p>And, you know, why should or shouldn't AI be regulated?
Because from a very 10,000-foot level, this seems to be, you know, you mentioned $100 million runs, you mentioned a flops limit. This seems to be like, hey, you know, only if some of these really, really bad things happen, and only for these really, really large companies, which can maybe afford these training runs, do we need some level of regulation.</p><p>Now at a very 10,000-foot level, I can see why people might think that sounds reasonable, because I see how other industries have reacted to this.</p><p>[Martin Casado] (6:58 - 10:16)</p><p>Yeah, no, and I totally agree. In fact, AI systems are software systems, which are under a very rich and robust regulatory regime that has been developed over 30 years, much of which has been passed in California. So I think it comes down to the following, which is, if you want to provide new regulation on top of software or a system, I think you want two things to be true.</p><p>The first one is you want to make sure that you're in line with the existing regulatory regime, because you have to work within that, and there's a lot of lessons learned. And the second one is you want to understand the marginal shift, the marginal risk. Like, was there a paradigm shift that necessitates new regulations?</p><p>Yes or no? So in the case of SB1047, neither of these are true, right? So for example, it actually changes the doctrine for how we approach AI regulation, and it throws out 20 years of discussion on things like liability and open source and size limits.</p><p>And not only that, it actually points to things that we've just shown not to work. Like, we had size limits on compute in the late 90s, and that was such a laughably bad idea that it just basically went away. I'm old enough to remember the rise of the internet and the rise of the web and what that did, right, to the policy regime.</p><p>In fact, I was at Stanford and helped create and co-teach a course on cybersecurity policy, right?
I did this with William Perry, who was the Secretary of Defense for the Clinton administration. Now, in the case of the internet, the big question was, is there a paradigm shift here that necessitates new regulations?</p><p>And you could make two very strong arguments that there was. The first one, there was this notion of asymmetry. And the notion of asymmetry was, the more you rely on this stuff, the more vulnerable you are.</p><p>So if you're in a conflict with another, say, nation state or state actor who's not reliant on it, you're more vulnerable. So that's very different than mutually assured destruction. So now we've got this big kind of risk posture difference.</p><p>The second one is, we actually had examples of risks. We actually had examples of new types of attacks. There's very famously a worm, the Morris worm, which took out actual computers and critical infrastructure.</p><p>And so at the time we were having that discussion, we were like, actually, you know, there is a good argument for a paradigm shift. There's a good argument for marginal risk. And you actually argue from first principles and you actually point to very specific instances.</p><p>And even in that case, the regulation that we came up with is much more even-handed and pro-innovation than what we're doing with AI. Now, in the case of AI, we have neither of these things. If you go to, you know, say, professors at Berkeley who are working on this, they'll say, it's very important we understand the marginal risk, but we don't understand it yet.</p><p>So how do you create a policy for something where you don't even understand the risk? And we're, you know, let's call it four or five years into this. And we have not seen any demonstration of, you know, new types of threats that you couldn't do with existing software systems.</p><p>And so, again, I know this is a long-winded answer. I'm just going to summarize it very quickly.
A hundred percent, we should regulate, you know, industries.</p><p>A hundred percent, we should focus on safety. Software is no exception. We have that for software.</p><p>AI is covered under that existing regime. And we've not done the work to demonstrate that this requires new regulation.</p><p><strong>10:20 - Is AI a paradigm shift?</strong></p><p>[Sriram Krishnan] (10:16 - 11:20)</p><p>On the new paradigm shift, there could be an argument made that attention, transformers, everyone's mind being blown by ChatGPT and, you know, all the funding, you know, some from the place that we work at and others, and the attention going into these large language models and training them, kind of constitutes a shift, right? And, you know, it's probably fair to say a lot of people, including, you know, I would say maybe both of us, are very optimistic about what this technology can bring. Now, I don't know if that qualifies as a paradigm shift.</p><p>It definitely seized the world's imagination and attention. So maybe that's one argument. The other argument I would say is, while I would agree with you that we haven't demonstrated this with AI today, you could make an argument that, well, this bill is only talking about future potential catastrophic scenarios.</p><p>So in a way, why do you even care today? Right? Like this is talking about scenarios which, you know, maybe, you know, most developers won't hit, and it's only for the largest of companies.</p><p>So why do you care so much about these things?</p><p>[Martin Casado] (11:21 - 11:33)</p><p>Yeah. So let's talk to the first one. When I say paradigm shift, I mean in the risk profile, not in the technology.</p><p>We're venture capitalists. We see paradigm shifts constantly and we don't create new regulations for ourselves.
I mean, this is what we do.</p><p>[Sriram Krishnan] (11:34 - 11:34)</p><p>Yeah.</p><p>[Martin Casado] (11:34 - 13:34)</p><p>What you need to argue is that this changes what you can do relative to software connected to a network. I mean, that really is the bar, and nobody's shown that. And some of the premier experts, take Dawn Song, right, who is actually very much pro-safety.</p><p>You know, one of the people behind this bill is Dan Hendrycks, who is at the Center for AI Safety. His advisor, who thinks AI safety is very important, will say it's too early to do policy.</p><p>It's too early. We don't know what the marginal risks are.</p><p>Understanding the marginal risk is still a research question. So when I talk about that, I'm talking about marginal risks, not a paradigm shift in technology. Right.</p><p>So on the second question, there's two answers. The first one is it actually does apply today. So if SB1047 were enacted today, you could argue that Meta would not release their open source models.</p><p>We actually saw this in the EU, where they decided not to release the models in the EU because of similar legislation. And the reason for that is because their training runs are over a hundred million dollars and there's so much ambiguity around the liability, right? It is such a poorly written bill.</p><p>Nobody knows what it means, and that changes the risk profile for these large companies releasing open source. I know we haven't talked about open source yet, but just, you know, for those that don't follow this stuff daily, it's very important to the startup ecosystem to have that out there. And by the way, I would say the second point to this is technology evolves very quickly.</p><p>So even though it does matter today and it does have impact today, even if that wasn't the case, at some point we catch up to it quickly.
And the greatest example of this is the executive order, where they had this silly FLOPs number, 10 to the 26. So they said, if the models get to this size, 10 to the 26, then they should be under this set of regulations.</p><p>And the industry caught up to that number within a year and a half. And then of course, you know, there's this idea that this is some great future thing, and even small companies caught up to this number very quickly.</p><p>So like our ability to understand how quickly technology evolves is very poor.</p><p><strong>13:36 - Who is funding this bill? &#8216;Baptists vs bootleggers&#8217; and Nick Bostrom&#8217;s Superintelligence book being the origin point.</strong></p><p>[Aarthi Ramamurthy] (13:35 - 13:51)</p><p>You had mentioned California trying to one-up the EU. What do you think is sort of the motivation for SB1047, for this entire suite of consistent AI safety regulations coming in? Who's funding this?</p><p>What do you think is really going on?</p><p>[Martin Casado] (13:52 - 16:02)</p><p>So I think this is the classic Baptists and bootleggers that Marc likes to talk about. I think this is the case here. I mean, there are a number of people who really believe in AI existential risk.</p><p>And it was very interesting. If you track it back, a lot of this belief predates things like Transformers, Sriram, and things like ChatGPT. A lot of it's rooted in Nick Bostrom, who's a great philosopher.</p><p>So Nick Bostrom was a philosopher at Oxford. He wrote this book called <a href="https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies">Superintelligence</a>, which is actually great. I love the book.</p><p>But it talks about a platonic ideal of AI. It doesn't talk about any specific system.
It's kind of almost like a thought experiment.</p><p>And that very famously, actually, if you read the Walter Isaacson book on Elon Musk, that very famously created this whole movement to protect humanity against AI. And I think it was always viewed as almost a lark for rich, smart people. And nobody took it really seriously.</p><p>Then OpenAI started actually showing very promising results. And then somehow these two things got conflated, which is, all of the concerns from Bostrom's existential platonic view of AI got wrapped into actual systems. It turns out there's a pretty large funding apparatus behind this: Eric Schmidt very famously, Dustin Moskovitz very famously, Reid Hoffman to some extent.</p><p>And so they started deploying, Jaan Tallinn, they started deploying a lot of money. And so again, Eric Schmidt was behind the executive order. And some of that money went to an institute called the Center for AI Safety.</p><p>And listen, their stated goal is to protect humanity from existential risk. You just go to the website, right? So you've got billionaire funding that comes from this kind of Doomer line that goes back to Bostrom and very clearly starts prior to Transformers or any of that stuff.</p><p>And I think they really believe. They're tied up with the effective altruism movement. I think they really believe.</p><p>They believe they know better than we do, and that they're smarter, so they can protect us and they can come up with rules. And so there's this big funding apparatus. Now, as far as the bootleggers, I have a hard time teasing it apart from the primary person backing this bill, which is Scott Wiener.</p><p>[Sriram Krishnan] (16:02 - 16:10)</p><p>So by the way, this is California Senator Scott Wiener, who maybe a lot of you may have heard of, but he's been at the center of this.</p><p>[Martin Casado] (16:10 - 17:01)</p><p>Yeah, that's right.
Yeah. He's the one that's basically writing and sponsoring the bill.</p><p>The one last thing I want to say about CAIS before I go to Scott Wiener, because these are the two kind of prime actors behind this. CAIS is run by a guy named Dan Hendrycks, who's actually a great AI researcher. He's done a bunch of great work and I've been a fan of his work.</p><p><strong>16:35 - Scott Wiener&#8217;s support of the bill</strong></p><p>Now, Senator, California Senator Scott Wiener is the one that's kind of backing this bill. And by the way, prior to all of this, I was a huge fan. He's done great work on housing.</p><p>He's done great work on kind of LGBTQ issues. I've been a huge fan. But he clearly has no background on this stuff.</p><p>And he has now kind of taken it as his raison d'etre in the face of tons of opposition. I think he believes AI regulation needs to exist. I don't think he has any idea why this bill is so bad.</p><p>[Sriram Krishnan] (17:01 - 17:21)</p><p>On the other side, I would say there's been an alliance of people who've been speaking up against it. I saw this tweet or post the other day which talked about Andreessen Horowitz and these tech bro billionaires. So I want to ask you two questions.</p><p>The second part of the question is, who are the people speaking up against it? And the first part is like, Martin, when did you become a billionaire? I just did.</p><p>[Martin Casado] (17:21 - 17:22)</p><p>I know, I got promoted.</p><p>[Sriram Krishnan] (17:22 - 18:37)</p><p>I think Bostrom, by the way, kind of recanted some of this, and he actually maybe regrets what he set in motion now. It may be a bit too late. I want to zoom in on some of the details, right?</p><p>Because I do think there are some set of people, like, for example, Zvi Mowshowitz, or some of the people who work at Anthropic or OpenAI or some other companies, who find this thing reasonable, and it gets lost in the details.
I want to get focused a little bit on the details.</p><p>The first is the $100 million number, right? So at a very high level, the bill basically says this bill doesn't apply to you at all unless you spend $100 million training a model or $10 million fine-tuning it, right?</p><p>And I guess the first question is, okay, if that is the case, sure, Meta may be impacted, but the vast majority of startups may not be impacted. It's only like a few big guys. So if I'm a startup founder, if I'm everyone else, why do I care?</p><p>Maybe Meta just deals with this. They have a bunch of lawyers, and so do OpenAI and a couple of others, and everyone else shouldn't be impacted. So why isn't that number a reasonable sort of big-guy cutoff?</p><p>[Martin Casado] (18:37 - 20:31)</p><p>Yeah. So I have one, I would say, fairly biased view on this, and then one I think is fairly neutral.</p><p>My biased view is, in this game of AI, even for startups, $100 million is not a lot. I personally work with a number of companies that will be doing training runs of that size, and they're private. They're not Meta.</p><p>And anybody that's paying attention to this industry realizes that this is just not that high of a bar, even for private companies. Okay. So that's a biased view, clearly.</p><p>And I think people can take issue with that view and say, it's not a startup, et cetera. The second view is, it's very hard to actually know how to account for that $100 million. Does that just mean one single run? Is that multiple runs?</p><p>Like, these things tend to have drops along the way. Does that mean, like, if there's one model that you've been working on for five years? And if you've raised, let's say, $130 million, and 80% of that is GPUs, that is a pretty modest, like a Series C, startup. Like, we're not talking about something very large.
We're talking about a pretty modest startup. And then if you go away from the startup ecosystem and say somehow private companies are just exempted, even then one of the big problems is that so much of the Silicon Valley tech ecosystem has benefited from the releases of open source from large companies like Google and like Meta, right?</p><p><strong>19:34 - Open source benefitting the software world and risks to open source due to this bill</strong></p><p>I mean, you could argue that the cloud wouldn't have happened. You could definitely argue that mobile phones, like Android, wouldn't have happened if you didn't have these releases, right? So every part of the software stack has benefited from open source, and it's been the lifeblood of innovation, private innovation.</p><p>And now if there are liability and reporting constraints and ambiguity, these large companies are less likely to release those. And as a result, researchers aren't able to benefit, academics aren't able to benefit, and certainly startups aren't able to benefit from this. And again, we've actually seen this happen in the EU, where Meta decided not to release it because of ambiguity around the laws.</p><p>[Sriram Krishnan] (20:31 - 20:48)</p><p>Maybe this cuts to something at the heart of it. By the way, I'm happy you addressed the number, because I think the number can be fixed. Like, you know, Scott Wiener can be like, well, let's make the number 200 million or 300 million. I think the number could be fixed. I think this is really about, we need open source in there, but I guess we've been dancing around it.</p><p>[Martin Casado] (20:48 - 21:23)</p><p>Well, sorry, let me just say one more thing. So maybe you're going to get to this, but I'll just tee it up and we can get to it later, which is, there isn't a shred of evidence, not one, that safety has anything to do with the number of flops that went into a training run.
So for example, I could probably use a million dollars to train something with a bunch of classified information to do something very dangerous, a million bucks.</p><p>And I could use a hundred million dollars to train something which is totally benign, right? These are orthogonal axes, and it's just bad policy to try and conflate the two.</p><p>[Sriram Krishnan] (21:23 - 22:27)</p><p>I guess this is actually a good segue into something I want to get at, because I think we can have this discussion on two levels. One is, let's call it the implementation details, the millions of dollars, the flops number. But I guess there is a spirit of the exercise here, right?</p><p>Which is, and I would kind of sum it up, and I think folks would disagree with me and you would probably agree with this: if you're releasing a very, very, very large model, like a very sophisticated but large model, right, you ought to be required to make sure you have a safety plan and maybe run some checks and maybe be held to the same kind of liability that a lot of other industries are held to.</p><p><strong>22:10 - Are large models more dangerous?</strong></p><p>If only for the largest players, right? Now we can maybe debate the ambiguities, and maybe imagine a world where these ambiguities are resolved, but I guess there's a spirit of the exercise here. And I guess the question to you, Martin, would be, do you agree with the core premise or the spirit of what this bill is trying to achieve?</p><p>[Martin Casado] (22:27 - 22:39)</p><p>Well, can I ask you a question? Because you have this assumption that I just don't buy, which is you say, if you're working on a large model, this should happen. Why do you think a large model is more dangerous?</p><p>[Sriram Krishnan] (22:39 - 24:10)</p><p>Well, I would say that, and I was going to try and get to this later, anybody who used ChatGPT had their minds blown.
That's at the simplest level. And we have seen since ChatGPT, with the advancements from Claude and Llama and all these folks, that these models seem to be getting better.</p><p>Now we can debate which benchmark and so on, but I would say the models we have today are better than the models we had a year ago. And I think one argument could be, if they are getting more sophisticated, how do we ensure that this sophistication doesn't go off in a direction that creates risk? For example, what if this sophistication goes off in a direction where you're like, hey, model, jailbreak out of your sandbox environment and go do some naughty things out there on the internet, or go create a new neurotoxin, or whatever.</p><p>And the idea would be that it's only the SOTA, the really largest models, that are the most sophisticated. And given that there is debate on whether that could be a risk or not, why don't we just play it safe? And only for the largest, most sophisticated models, which take a lot of money and resources, just given the amount of GPUs and compute it takes, let's put them through a small set of checks.</p><p>I guess that would be the strong-form version of the argument. Great.</p><p><strong>24:10 - Is there a correlation between size of models and risk associated?</strong></p><p>[Martin Casado] (24:10 - 26:36)</p><p>So let's just inspect that a little bit, because I think this is a great one. So let's just say that you put this policy in place. What do you think is more dangerous?</p><p>We actually have a lot of experience with these models. We kind of know how they work. We actually know specifically how the mechanics work.</p><p>So let's say you put this in place and someone decides, you know, there's a hundred million dollar threshold, whatever. I'm going to stay under that, but what I'm going to do is actually build a model to create neurotoxins.
That's what I'm going to do.</p><p>By the way, it turns out that these large models can't do novel things like that, because they basically do averaging. They do smoothing. I mean, the whole neurotoxin thing has basically been debunked for this reason.</p><p>And it's kind of a very interesting- You just dropped that in- No, no, I'm going to walk through it, right? Early on, it was so funny, someone showed, like, oh, this came up with a novel neurotoxin.</p><p>And probably the best person in the world to speak to this is Vijay Pande, who's another partner of ours and a Stanford professor in bio and informatics, right? So he really knows this. And he looked at all the components of this, quote unquote, novel neurotoxin, and they all had high toxicity.</p><p>So it was like, sure, if you take a bunch of toxic things and put them together, it's still toxic, right? And so he goes to me, he says, Martin, that's like building an airplane that doesn't work. It's like, you don't need AI to do that.</p><p>Clearly, you can kind of aggregate a bunch of stuff. So there's no shred of evidence, no shred, that if you're just throwing in a bunch of data at scale, you get these emergent properties. On the other hand, there's extreme evidence that if you do something much smaller and much more targeted, much more focused, you could create something that creates novel weapons, for sure.</p><p>You use data that's sensitive, you do, like, AlphaFold-type stuff. Now you've lost all your generality. It's almost the anti-scale argument.</p><p>So this is a great example of where scale actually is saving you, because you're averaging out the answer. And actually a very focused effort would be far more dangerous, on a specific example that the AI doomers use. So this policy is just wrong-headed.</p><p>It's almost like punishing you for doing something that's more safe and allowing you to do something that's less safe. And it's one of their examples.
And by the way, there's so many of these.</p><p>So I just want to be clear: there literally is no connection between size and risk. And the one example that you used, which you allowed me to dwell on, thank you very much for bringing it up, kind of shows that. I mean, it just turns out that as you get less general, you get more dangerous and you also get smaller. It's almost like a negative relationship.</p><p><strong>26:45 - How would Martin frame any regulations on AI? What&#8217;s a better way?</strong></p><p>[Aarthi Ramamurthy] (26:36 - 27:11)</p><p>Going back to what Sriram had said, right? There's the implementation of it: is flops the right way to go look at it? Maybe not.</p><p>Then there is, should we even do this at all? And maybe to set aside the question of whether size is proportional to risk: if you had to do this, I guess, Martin, how would you come up with any sort of framing or regulations here?</p><p>Because right at the beginning, you'd said, similar to, say, the airline industry or anything else, there is validity in some sort of regulation here. So how would you think about framing it?</p><p>[Martin Casado] (27:11 - 28:43)</p><p>Yeah. So I think I fall in the same camp that Dawn Song does, who's, again, Dan Hendrycks' advisor, or Ion Stoica, or any of these professors, which is, right now is the time to really understand marginal risk. It's very, very important that we understand marginal risk.</p><p>And I do think that I would fund a lot of research efforts to understand marginal risk. So that's one. The second one is, we do have a very robust regulatory regime for software.</p><p>I would make sure that whatever we do adheres to that. And then, if I was worried that the research wouldn't catch up in time, which may be the case, I would start focusing on applications, and I would start regulating the applications of these things.
For example, deepfakes, I think, have potential for political and social consequences.</p><p>So these are the things we can absolutely study and look at and decide we're going to regulate, which, by the way, you don't need a hundred million dollar model to do deepfakes. CSAM is incredibly important, right? This is child sexual abuse material, right?</p><p>So if there's any potential use for that, I want to understand it, study it, and make it totally illegal. I actually think data access and privacy, which is totally orthogonal to scale, is also something that is worth looking at. Like, listen, is my personal data in these models, such that someone can divulge it?</p><p>This is something we've got a robust policy framework around, and we may need to extend that too. So there are these very applied adjacencies that we understand, that actually solve the problem, that I think we should a hundred percent look at. This is not what's happening with SB1047.</p><p>SB1047 literally came out of nowhere, from people that don't understand what they're doing. And we're going to have to live with the consequences if it passes.</p><p><strong>28:46 - Nancy Pelosi opposes the bill. Some famous researchers are for the bill. Who comprises the two opposing camps and what is the motivation?</strong></p><p>[Sriram Krishnan] (28:43 - 29:08)</p><p>Yeah. I want to ask you about some of the dynamics politically. A few days before we recorded this, former Speaker Nancy Pelosi came out and basically really criticized the bill for a bunch of reasons.</p><p>So could you tell us maybe what happened there? Because it doesn't seem like even the Democratic Party agrees with itself, and you have a former Speaker coming in and saying, this is not a good idea.</p><p>[Martin Casado] (29:08 - 29:25)</p><p>So here's the thing: there are very few voices in favor of this, right?
I mean, there's a set of personalities who are kind of well-known in favor of anything that's kind of doomer, right? Anything that's regulation.</p><p>It's Geoff Hinton, you know, who is a Canadian academic, and he's a Turing Award winner.</p><p>[Sriram Krishnan] (29:26 - 30:25)</p><p>Yeah, maybe, can I interrupt you there? Because I think that, let me make the strong-form case for some of the other folks on the other team, so to speak, right? And some of these, I think we know, I'm friendly with, like, for example, some of the folks in the EA world.</p><p>Let me take a look. One is you said that there are people who are doing this, you know, who are maybe not well-informed. The counter-argument to that would be, this has support from Geoffrey Hinton, Yoshua Bengio, and I would say Anthropic, and I don't want to pick on them, but there are others who are working at, or were once at, the state of the art, who maybe have issues with this particular bill, but are definitely sympathetic with the broad notions of AI safety.</p><p>So I guess the first question would be, yes, you're talking about people who have debunked it, but these are definitely very, very credible people in terms of where they've worked in AI or where they are right now, who seem to be broadly sympathetic with the idea of these large models contributing to risk, even if they may have issues with this particular bill. Like, how do you think about it? Do you see multiple camps here?</p><p>[Martin Casado] (30:26 - 30:42)</p><p>Yeah, for sure. Listen, there's definitely differences of opinions, right? You've got two Canadian academics who have no experience in tech policy at all, who are weighing in, and they're not, by the way, accountable at all to whatever happens because they're in Canada.</p><p>They can have opinions without actually having to suffer the consequences.</p><p>[Sriram Krishnan] (30:42 - 30:45)</p><p>That was our Canadian audience right there.
No, but it's true, right?</p><p>[Martin Casado] (30:45 - 30:53)</p><p>I mean, listen, it's all fun and games until you push regulation on somebody else that you're not accountable to, right?</p><p>[Sriram Krishnan] (30:53 - 30:58)</p><p>Well, that's the second episode where you insult a Canadian audience. There's another one too. We just launched over here, but sorry.</p><p>[Martin Casado] (30:58 - 33:01)</p><p>Listen, I grew up in Montana, and that's like the Canada of the United States. I feel like they're our poor brethren.</p><p>You know, you've got Max Tegmark. You've got Stuart Russell, who has been on this for a very long time.</p><p>He at least is in California. That's great. You have Larry Lessig, who is at Harvard, you know, and by the way, his work is in copyright.</p><p>Like I was at Stanford when he was doing the Creative Commons stuff, which was great, but that's not AI safety or risk. Like that's just not his thing, right? So those are the voices that are for it.</p><p>Notice most of them are out of California except for Stuart Russell, and they're the same. They're basically the same voices. And like, listen, that's a legit opinion.</p><p>You can have it. And so what do you do in these situations? Well, maybe you should stack it up against all of the voices that are accountable and absolutely as credible.</p><p>So for example, Yann LeCun, not in California, but he does work for Meta, which is a California company. That's something. He also is a Turing Award winner and he thinks this is total nonsense.</p><p>But then we actually go to people within California, like, you know, Fei-Fei Li, who's one of the most notable people in all of AI. She's called the godmother of AI. You've got Andrew Ng, who's like, you know, one of the top ML researchers of all time, a Stanford professor.</p><p>And so, listen, I don't think it's possible to have a bill where nobody supports it.
But if you look at people who are accountable, people who have diverse experiences, a broad base, you will see that the opposition to SB1047 has orders of magnitude more people against it in various walks. It's investors.</p><p>It's founders. It's politicians, which you mentioned with Pelosi. It's academics.</p><p>It's students. It's researchers. I mean, the outcry has been enormous.</p><p><strong>33:10 - Why does Pelosi oppose the bill?</strong></p><p>[Sriram Krishnan] (33:02 - 33:26)</p><p>I just got to dig into a couple of things. I want to get back to former Speaker Pelosi, because why did she weigh in? And I guess the second part of it is, why is this a state-level effort versus a federal effort, which is another strain of argument that has come in.</p><p>How do you think about, one, why Pelosi weighed in? Because it seems like there's a lot of disagreement even in the Democratic Party over this.</p><p>[Martin Casado] (33:27 - 34:02)</p><p>Listen, I think Silicon Valley is a wonder of the world. I really do. And I think that it has been for a long time.</p><p>Every major super cycle for the last 30 years was rooted in here. And bills like this will harm that. And I think Pelosi, who lives in San Francisco or has a house in San Francisco and is a native, understands that.</p><p>And I think Anna Eshoo, who also went against this, understands that. I think Ro Khanna, who's also in San Francisco, understands that. And I think Zoe Lofgren, who is in Monterey, but also represents the area, understands that.</p><p>And so I just really believe that these politicians know this is bad for a marvel and wonder of the world.</p><p><strong>35:00 - Leopold Aschenbrenner and &#8220;Situational Awareness&#8221; paper</strong></p><p>[Sriram Krishnan] (34:02 - 36:16)</p><p>I totally agree. You know, Martin, I agree. And I think, in some way, this is sort of the...</p><p>In H.P. Lovecraft novels, right, there is this... I have a point. I have a point.</p><p>I'll get to it. Trust me, right?
Trust me, this is a metaphor.</p><p>You have this existential horror, which is basically a projection of this malevolence, which exists in a different universe, a different galaxy, but is kind of projecting into our world. Right? And I think of this bill as a projection of a lot of the EA versus optimism debates, right?</p><p>This is basically LessWrong, you know, some of the conversations from Eliezer Yudkowsky, or a lot of the AI safety conversations, being projected through sort of the instrument of a California draft bill. So maybe, I guess, we should kind of get to the heart of it.</p><p>Martin, you might have seen this document that went around a few weeks ago from Leopold Aschenbrenner, I might butcher his name, this ex-OpenAI economist slash, I think, analyst. Very, very smart guy. I met him.</p><p>And he wrote this doc called <a href="https://situational-awareness.ai">Situational Awareness</a>, right? By the way, if you folks haven't read it, it's an interesting read. But I want to kind of point to one thing.</p><p>In that doc, right at the beginning, he basically draws a line of GPT 1, 2, 3, 4, 4.5, 4o across time frame and capability. And he basically says that, hey, do you expect this line to stop? And I guess the question is, if this line of complexity and performance and capability of these models keeps improving, one, do you agree with that?</p><p>Do you agree that the line is going to keep going? And the second part, if you do agree with that, does it not behoove us to say, hey, we may not understand how these models totally work, and in the off chance that they do something bad, let's try and put some effort towards stopping it?</p><p>Maybe it's a two-part question. One, does the line keep going? Second, say the line keeps going.</p><p>They're getting more complicated, more capable.
Maybe let's play it safe.</p><p><strong>37:20 - Non-systems people viewing systems - computer systems are not parametric but sigmoidal.</strong></p><p>[Martin Casado] (36:17 - 39:47)</p><p>I've got a precursor answer and then the answer. The precursor answer is, don't you find it weird that the people that are most articulate on doomer scenarios decided to go work on this stuff? It's like the dissonance.</p><p>I actually don't think I've ever. I think it's kind of unique to AI. I've never understood this.</p><p>Like, oh, this stuff is terrible. I'm going to go work for the organization that's bringing it to life. I mean, he literally worked at OpenAI, which pushed the frontier.</p><p>So if anybody's culpable, he's culpable. And it's kind of like, I mean, and by the way, this is throughout the industry.</p><p>The people that are the most against it are literally the ones that are investing in the organizations that are bringing it to life. Like, I mean, very materially. So for one, I just have a hard time with this very weird dissonance.</p><p>If you believe that, don't go work at the number one organization that actually caused this. The second one is, listen, I think that this is what happens when non-systems computer scientists kind of view systems. And it's a very economist view.</p><p>And it's like macroeconomics, where you believe the world is parametric, and the world is not parametric. And having worked in computer systems for 30 years, I remember all of these numbers when it came to, like, whatever, like simulation, et cetera. It just turns out most systems are sigmoidal.</p><p>So what does the word parametric mean? Parametric just means it follows some well-defined function. Like it just goes up and to the right forever.</p><p>Or maybe it follows a sigmoid. And a sigmoid just means that it tapers off and asymptotes, right?
That's all it means.</p><p>So when I see a lot of, especially economists... I think Leopold's doc, you know, is exactly what an economist would say, which is like, oh, well, this line must go forever because we have these great economic models. And that's just not how computer systems, you know, tend to work. And they tend not to be parametric in general, by the way.</p><p>They tend to, like, you know, asymptote off and then go up on new fixes, et cetera. And so there's, you know, in the history of computer science, let's just go back to this. In the history, you could basically take any technology.</p><p>You can take bit rate, you can take CPU cycles, you could take memory, and you could say in the early days that this thing is going to go forever. And it always asymptotes 100% of the time, like 100% of the time. It's never been the case that it just, you know... The only thing I know of that continues to grow exponentially is life, because we take energy from the sun and we replicate, right?</p><p>And that's a property of humans. That's not a property of computer systems. Computer systems are very, very different.</p><p>The rational view for people that are not economists, that have been working in computer systems for decades, like myself and many others, is that no, this is going to asymptote like it always does. Even some of the doomers, like Gary Marcus, who, listen, I can't interact with him on Twitter, but he got this one right, is like, listen, you know, we're going to run out of data and then these things are going to slow down, and this is going to happen. And this seems to be happening, like these things are slowing down.</p><p>So the rebuttal to that is: systems tend to slow down. They need more advances. We have no indication that these things are getting more dangerous at all.</p><p>We've been with them for five years.
We haven't seen any kind of increase in that. They do seem to be slowing down.</p><p>And so let's take a look at what we believe the long-term trajectories are, taking into account the actual systems that underlie them, because these platonic exercises that are just graphs on an economist's paper don't tell us how the future unfolds. They just don't.</p><p>[Sriram Krishnan] (39:48 - 41:22)</p><p>I think, you know, if I can kind of summarize this, because, you know, we've all been having millions of these arguments in different forums, I think what you just said is the fundamental schism in thought. Whereas, you know, people like you and many, many others and me and a lot of folks believe, look, A, you know, we're going to asymptote, right? Maybe it's not GPT 4.5, but soon, and we can get into the data wall and other things as to why. But second, more importantly, no one has been able to demonstrate harm yet, right? Like, you know, I always tell people this challenge of, hey, you know, show me something that any model can do, Sonnet, 4o, whatever, that a college student can't do with Google. So unless you've seen harm, you know, why should we try and, you know, put this regulation in place when it doesn't seem to be very harmful?</p><p>I would say the other, you know, opposing school of thought... Their view would be these things are getting more capable. Maybe they're slowing down, but let's play it safe, right? Let's, you know, let's just kind of protect ourselves against the unknown.</p><p>Because, you know, as, I don't want to say who, but one of the people you mentioned would say this: we might be creating something which could be smarter than us. Maybe we're not. But in, say, the one or two percent probable chance we are, maybe we should try and take precautions.</p><p>Now, by the way, I've had n versions of this debate on both sides. I think, you know, both sides are very, very dug in.
But I think that in some ways is the fundamental schism, you know, at the heart of all of this.</p><p><strong>41:30 - AI is the ultimate &#8216;Deus ex machina&#8217; (God in the machine)</strong></p><p>[Martin Casado] (41:22 - 42:02)</p><p>So here's what's very hard with this discussion. I'm going to make a meta point relative to this, which is AI has become the ultimate deus ex machina. Deus ex machina just means the god in the machine, right?</p><p>It's this unspecified force that you can kind of move around as the argument fits. And so this discussion is actually very much a moving target. So, for example, there's all of this concern that there's actually emerging, emergent reasoning in LLMs, right?</p><p>This has been this kind of long concern. I think that's largely been debunked at this point.</p><p>[Sriram Krishnan] (42:03 - 42:11)</p><p>I mean, when we publish this episode and we make this statement, I'm not sure there'll be consensus that it has been debunked.</p><p>[Martin Casado] (42:11 - 42:14)</p><p>OK, OK, OK, that's fine. There's not strong evidence that this is the case, and there's a lot of evidence that this is not the case.</p><p>[Sriram Krishnan] (42:16 - 42:44)</p><p>Fair enough. I'll give you two quick responses. For example, if you listen to Dwarkesh Patel's podcast, he had Shulman, he had a few folks from Gemini and Anthropic.</p><p>They would point to, for example, the Othello paper and a couple of other papers, which may point to certain reasoning capability, right? So anyway, I would say this is not a settled topic.</p><p>[Martin Casado] (42:45 - 42:57)</p><p>Sure, for sure, for sure. That's fair enough.
There have been claims about certain aspects of LLMs, like in-context learning and grokking, which they claimed showed generality, which were specifically disproven.</p><p>[Sriram Krishnan] (42:57 - 42:57)</p><p>Yeah.</p><p>[Martin Casado] (42:58 - 43:47)</p><p>My meta point is that this is a game of whack-a-mole, just like the intelligent design debates, which is, you can continue to disprove that these things are dangerous, and the other side will continue to come up with new reasons why they are. And so it's just this infinite game, because AI and AGI are so unspecified. So I actually think the biggest problem is that there's a moving target on what these things are capable and not capable of.</p><p>And I just feel it's exhausting. But what I will say is, for many systems computer scientists, not economists, not physicists, many of these arguments just don't make sense. They're information-theoretically infeasible.</p><p>And they also are just not how systems tend to converge over time. And I think this is going to bear out. And then we're all going to look pretty silly having wasted all this time.</p><p>[Sriram Krishnan] (43:48 - 45:56)</p><p>Yeah, I think there's something here. And one thing I figured out through my own commentary is... Did you watch Game of Thrones, Martin? Did you ever watch Game of Thrones, all the seasons?</p><p>I did not. Oh, my goodness. So disappointing.</p><p><strong>44:15 - Sriram compares AI regulations to Game of Thrones episode</strong></p><p>But OK, so for those of you who watch Game of Thrones, there's this famous scene or famous sequence in one of the later seasons where Cersei&#8212;spoiler alert&#8212;basically frees the High Sparrow and gets these priests out, hoping that they're going to help her hold on to her power. But ultimately, they wind up seizing all power and they turn on her. And I use this scene a lot.</p><p>This is, by the way, the famous kind of shame, shame, shame meme scene that comes from this sequence.
But the meta point would be, when you set regulation in motion, it's hard to predict where it winds up. And often my rebuttal to people who say, hey, this is lightweight regulation, is, hey, you don't actually know where this winds up.</p><p>The folks who thought up GDPR had good intentions. They were like, hey, let's stop big social media companies from abusing privacy. Very good intentions.</p><p>But what they actually wound up doing is, anybody who ever goes to Europe is clicking on a bunch of cookie accept/decline buttons nobody ever reads. And in some ways, you actually helped the big social media companies, because it turns out they are the only ones who have the resources and lawyers to go in and comply. So I think there is a lot of naivety around what the downhill, slippery-slope version of this looks like.</p><p>I'll give you one example. The bill&#8212;sorry, I'm kind of taking over your role here&#8212;but the bill actually talks about submitting safety plans. Like the idea that you go submit these safety plans.</p><p>Now, on the face of it, they look pretty simple. Why shouldn't everyone have a safety plan? But you quickly realize that this can be very easily weaponized.</p><p>And a year or two down the line, somebody points and says, hey, look, all these open source models are not putting in as much work as these other guys. So these guys, by definition, are classed as unsafe because of the documents they themselves submitted to us. So which is why some of these arguments, which on the surface seem like lightweight regulation, if you actually think about what happens in step two, step three, a year or two down the line, actually lead to some really, really bad consequences.</p><p><strong>46:00 - Anthropic&#8217;s investment in AI safety</strong></p><p>[Martin Casado] (45:57 - 46:28)</p><p>Yeah. Just a very specific thing, because you brought up Anthropic previously.
So for every voice in support of this bill, of which there are not very many, there are probably ten in opposition in the same sector, and many sectors that are not represented are not for the bill.</p><p>The two prime labs in LLMs have taken positions on this. So OpenAI is against it. And Anthropic has this very lukewarm thing where they issued a response, but in their response, they're like, we're not sure.</p><p>So it's like, not really an endorsement, but kind of an endorsement.</p><p>[Sriram Krishnan] (46:29 - 46:48)</p><p>I want to highlight what he just said. So OpenAI wrote a letter, which came out the day before we were recording this, which basically said, we think this bill is a bad idea. It's going to hurt innovation.</p><p>It's going to hurt national security. This should be done at a federal level. Anthropic wrote a letter, which can be read in multiple ways, but they definitely had strong issues with it.</p><p>And this is like a week after Pelosi's statement.</p><p>[Martin Casado] (46:48 - 47:57)</p><p>And Anthropic in particular said, listen, we think on balance it's probably a positive, but we're not sure. It's probably the most noncommittal letter. It probably cancels itself out.</p><p>But even in the Anthropic response, it's very interesting, because they had some proposals, not all of which were taken up, but their proposals basically said, you need to make a best effort on safety. Now, Anthropic very famously invests a ton in safety. And it's not clear what that means.</p><p>And of course, it'll come down to the courts to determine whether this is actually followed. And so if they're the one that invests the most in safety, clearly they set the standard for best practices and best effort. And again, this has a very GDPR-like consequence: anybody that doesn't have their resources or knowledge to do that would be held liable.</p><p>And this is very explicit liability as held by courts.
And so, again, they were kind of very mealy-mouthed about their support or lack of support, but even their proposal, you could argue, would be bad for innovation. I mean, good for Anthropic, good for Anthropic, but bad for the rest of us.</p><p>[Aarthi Ramamurthy] (47:57 - 48:33)</p><p>Yeah. I think that's a great point. I think earlier you had said, this is not just for really large companies. For startups, you're thinking like Series B or Series C, you're going to get there.</p><p><strong>48:15 - If you&#8217;re an AI founder, what can you do about this bill today?</strong></p><p>If you're doing things right, you're going to get there pretty quickly. So I guess my question for you is, if you're a founder, what would you do in the context of what's going on right now with this bill? Like, what can you do?</p><p>I think most founders until this point are like, well, it doesn't really impact me. My startup's too early. It doesn't really matter.</p><p>Let somebody else fight this fight. So what do you think people can do today?</p><p>[Martin Casado] (48:34 - 49:24)</p><p>Yeah. So I'm so glad that you're asking for the call to action, but I'm going to give you a very depressing response, which is, as far as I know, 100% of founders I've spoken to are against this bill. 100%.</p><p>Everybody understands the impact. Everybody understands that open source will be limited and innovation will be hurt. Many have spoken up, and Scott Wiener is simply not listening to them.</p><p>And so you have basically a rogue politician who has spoken directly to many of the experts and founders, like, of course, Amjad and Fei-Fei and Andrew and Ion, and has ignored them. So I would say we do have to speak up.</p><p>You do have to make your voices heard.
But many of us are very discouraged because it's fallen on 100% deaf ears, in the face of out-of-state support from people not even accountable to the bill. So it's a very frustrating situation.</p><p>And I'm sorry to be giving that answer, but that's just, you know, the state of play today.</p><p>[Sriram Krishnan] (49:25 - 50:16)</p><p>I do think, you know, one thing which has been heartening is to see senior leaders in the government apparatus, like Pelosi herself and Ro Khanna, who was actually a guest on our show, all speak up against this. So you're seeing politicians connected to Silicon Valley actually try and say, hey, let's hold off. This could really cause harm.</p><p>I want to ask you a bit of a personal question, because I know Martin, I work with him, and Martin is kind of the nicest person around. So calm, you know, he calms me down. I call him when I'm kind of angry about something and he calms me down.</p><p><strong>50:00 - Why is this bill a personal issue for Martin?</strong></p><p>And you've been really, really worked up about this, right? And when I see you on Twitter, on other places, this feels so personal to you. Why is it so personal to you?</p><p>And also, what have you learned in trying to fight this fight in public?</p><p>[Martin Casado] (50:16 - 52:25)</p><p>Yeah, so listen, I'm a moderate centrist Democrat. I've been all my life. I'm a moderate dude.</p><p>I don't like politics. I don't get involved. I've never spoken up publicly about politics that I can recall, since about maybe 25 years ago, when the same thing was playing out. So we've been through this before. We did this with, you know, cryptography.</p><p>We did this with the internet. We did this with digital rights and DRM. And I was in college, an undergrad, during that time.</p><p>And it was the broad consensus that this was very bad for my industry, which I loved, which is computer science, right?
Like we were going to choke off software. We're going to choke off innovation.</p><p>And a lot of this was driven for the wrong reasons. And so I got very active then 25 years ago. And then I think since then, we laid a great foundation, which has been adhered to.</p><p>So the reason I've kind of had to come, you know, you know, put back the old battle gear on and like come out of retirement and like go back in the field is like, again, we're at this point where we've forgotten all the lessons that we learned, that there's things that are very dangerous to the discipline that I love. I really love computer science. I really do.</p><p>I really love software. And what's being pushed is very, very dangerous for the industry. And unfortunately, and this is the discouraging thing, it basically is a political gambit by one person.</p><p>I mean, it's not even based on, you know, some understanding of the issues or some first principles thinking or any of that. And so, yeah, so hopefully this, you know, ends up going away and senior voices take over like, you know, Zoe Lofgren has been fantastic and Ro Khanna has been fantastic. I agree with her comments.</p><p>Pelosi's been fantastic. Like the majority of the Democrats, I very much agree. And once like things normalize, I can go back to, you know, what I like to focus on, which is investing and building.</p><p>But until then, I think I feel like I need to do my duty. And many of us, I think I would call the call to action is for those of you that are not participating, I think this is the time in your career to do it because this matters. It really matters this time.</p><p>And then, you know, like once, you know, things get back on track, we can kind of go back and focus on other things.</p><p>[Sriram Krishnan] (52:25 - 53:21)</p><p>This whole thing has been kind of a crazy, bizarre experience. 
And one of the lessons for me is how one sort of minor extremist school of thought, just because you get one person who's in a position of power, can suddenly almost become law, right? And then you almost have to rally the Avengers and get all the, you know, the folks who actually understand what they're saying.</p><p>But you're kind of fighting this rearguard battle because it comes out of nowhere and it's a draft bill. And then you're trying to align people. So it tells you, like, you know, how quickly these things happen.</p><p>And I'll agree with Martin that this is essential. If you are dealing with AI, if you're building on top of open source AI, and trust me, a lot of you are, you know, sometimes without knowing it, right? Like this really matters.</p><p>So, you know, go read up, you know, go follow Martin, you know, go follow our partner Anjney, but not just folks at A16Z, right? Not just the tech bro billionaires that we represent. But Martin, by the way, I just want to go with...</p><p>I am not a billionaire. I want to be fair. I don't know.</p><p>I think you might be a billionaire.</p><p>[Martin Casado] (53:21 - 53:37)</p><p>Can I be very clear about a few things? Nobody has told me to do this. Like, it's not like Mark and Ben were like, Martin, go do this. Nobody told me to do this.</p><p>I'm definitely not a billionaire. I'm definitely not, like, funding people to do this. This is 100% a personal issue that I feel very passionately about.</p><p>Are you not a billionaire?</p><p>[Sriram Krishnan] (53:37 - 54:31)</p><p>Okay, maybe not for a bit. But all right. Martin, you know, I'll say this.</p><p>Martin so personally cares about this. And so do a lot of others. And I think a lot of us are just doing it because we passionately believe in this stuff, right?</p><p>And it's not just people at the front. Like you see so many others, you know, so many other voices in there.
You mentioned a few, like Fei-Fei Li, who is really the godmother of this space in so many different ways.</p><p>She had this great op-ed come out a couple of weeks ago. And I think, you know, we need all the voices we can get. But Martin, you know, this has been a fantastic conversation.</p><p>Thank you so much. And, you know, look, we touched on so many things, but, you know, we'd always like a follow-up conversation on all the safety and existential risk stuff, because this bill is one manifestation of this deeper argument. But thank you so much for everything you do.</p><p>And, you know, this was such a blast.</p><p>[Martin Casado] (54:31 - 54:33)</p><p>Yeah, I really enjoyed it. Thank you so much.</p><p>[Sriram Krishnan] (54:33 - 54:35)</p><p>Thanks, folks. Thank you.</p>]]></content:encoded></item><item><title><![CDATA[EP 84: Mike Maples on pattern breakers, startup insights and the Valley vibe shift ]]></title><description><![CDATA[Mike Maples is a legend in SV VC circles.]]></description><link>https://www.aarthiandsriram.com/p/ep-84-mike-maples-on-pattern-breakers</link><guid isPermaLink="false">https://www.aarthiandsriram.com/p/ep-84-mike-maples-on-pattern-breakers</guid><dc:creator><![CDATA[The Aarthi and Sriram Show]]></dc:creator><pubDate>Mon, 26 Aug 2024 04:33:40 GMT</pubDate><content:encoded><![CDATA[<p></p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;8883b228-e2b2-472c-b041-8bae3eb20ee4&quot;,&quot;duration&quot;:null}"></div><p>Mike Maples is a legend in SV VC circles. As a part of SGI and then a founding partner at Floodgate Ventures, he has invested in companies such as Lyft and Twitch. More than all of that, Mike is one of the clearest thinkers in tech and a dear friend. In this episode, we discuss his new book that summarizes his work on analyzing patterns around what makes startups work.
We also discuss how AI fits into this and the Valley&#8217;s recent vibe shift towards free speech.<br></p><p><a href="https://www.youtube.com/watch?v=RDmRlI8um4E&amp;t=199s">3:19</a> The genesis of Mike&#8217;s book: Pattern Breakers<br><a href="https://www.youtube.com/watch?v=RDmRlI8um4E&amp;t=455s">7:35</a> Patterns of successful startups<br><a href="https://www.youtube.com/watch?v=RDmRlI8um4E&amp;t=735s">12:15</a> Common thread between great founders<br><a href="https://www.youtube.com/watch?v=RDmRlI8um4E&amp;t=878s">14:38</a> Elon Musk&#8217;s Cybertruck: Mission vs Practicality<br><a href="https://www.youtube.com/watch?v=RDmRlI8um4E&amp;t=962s">16:02</a> BEST startup insight: Don&#8217;t try to think of a startup<br><a href="https://www.youtube.com/watch?v=RDmRlI8um4E&amp;t=1390s">23:10</a> LLM startups as edge cases<br><a href="https://www.youtube.com/watch?v=RDmRlI8um4E&amp;t=1608s">26:48</a> AI is a sea change<br><a href="https://www.youtube.com/watch?v=RDmRlI8um4E&amp;t=1811s">30:11</a> Timing is the hardest thing to get right<br><a href="https://www.youtube.com/watch?v=RDmRlI8um4E&amp;t=2042s">34:02</a> Silicon Valley&#8217;s political shift<br><a href="https://www.youtube.com/watch?v=RDmRlI8um4E&amp;t=2678s">44:38</a> Meta can&#8217;t win with either political side<br><a href="https://www.youtube.com/watch?v=RDmRlI8um4E&amp;t=2776s">46:16</a> Advice for fund managers<br><a href="https://www.youtube.com/watch?v=RDmRlI8um4E&amp;t=3089s">51:29</a> Advice for founders</p>]]></content:encoded></item><item><title><![CDATA[EP 82 Kyla Scanlon and the vibe-cession]]></title><description><![CDATA[We have become fans of Kyla Scanlon over the last year for her videos explaining the economy and &#8220;what is going on&#8221;, especially in a way that connects to young people.]]></description><link>https://www.aarthiandsriram.com/p/ep-82-kyla-scanlon-and-the-vibe-cession</link><guid
isPermaLink="false">https://www.aarthiandsriram.com/p/ep-82-kyla-scanlon-and-the-vibe-cession</guid><dc:creator><![CDATA[The Aarthi and Sriram Show]]></dc:creator><pubDate>Mon, 26 Aug 2024 04:21:59 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/qjdL5rMBdVc" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div id="youtube2-qjdL5rMBdVc" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;qjdL5rMBdVc&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/qjdL5rMBdVc?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p><br><br>We have become fans of <a href="https://kylascanlon.com/">Kyla Scanlon</a> over the last year for her videos explaining the economy and &#8220;what is going on&#8221;, especially in a way that connects to young people. She has a new book <a href="https://www.penguinrandomhouse.com/books/737854/in-this-economy-by-kyla-scanlon/">&#8220;In this Economy&#8221;</a> that we *highly* recommend. Kyla is a rare personality who can both deeply understand what&#8217;s going on with the economy and then communicate it in a way that really connects. This was a blast.<br><br>PSA: We had some internet difficulties so we had to switch what we use to record.</p><p><a href="https://www.youtube.com/watch?v=qjdL5rMBdVc&amp;t=134s">2:14</a> How would Kyla fix the U.S. 
economy?<br><a href="https://www.youtube.com/watch?v=qjdL5rMBdVc&amp;t=245s">4:05</a> Why home ownership might NOT be the best way to build wealth<br><a href="https://www.youtube.com/watch?v=qjdL5rMBdVc&amp;t=626s">10:26</a> Leadership gap left by Steve Jobs at major tech companies<br><a href="https://www.youtube.com/watch?v=qjdL5rMBdVc&amp;t=791s">13:11</a> Why Kyla wrote a book<br><a href="https://www.youtube.com/watch?v=qjdL5rMBdVc&amp;t=1053s">17:33</a> What people get wrong about Gen Z<br><a href="https://www.youtube.com/watch?v=qjdL5rMBdVc&amp;t=1375s">22:55</a> The difficulties of measuring the economy<br><a href="https://www.youtube.com/watch?v=qjdL5rMBdVc&amp;t=1675s">27:55</a> Living in a post-truth society<br><a href="https://www.youtube.com/watch?v=qjdL5rMBdVc&amp;t=1803s">30:03</a> The end of monoculture<br><a href="https://www.youtube.com/watch?v=qjdL5rMBdVc&amp;t=2040s">34:00</a> Kyla&#8217;s video-creation strategy<br><a href="https://www.youtube.com/watch?v=qjdL5rMBdVc&amp;t=2133s">35:33</a> Current state of social media platforms<br><a href="https://www.youtube.com/watch?v=qjdL5rMBdVc&amp;t=2324s">38:44</a> Kyla&#8217;s bold 2024 prediction</p>]]></content:encoded></item><item><title><![CDATA[EP 81 How to close a deal]]></title><description><![CDATA[We often get asked about what we have learned in our careers that we wish we knew earlier.]]></description><link>https://www.aarthiandsriram.com/p/ep-81-how-to-close-a-deal</link><guid isPermaLink="false">https://www.aarthiandsriram.com/p/ep-81-how-to-close-a-deal</guid><dc:creator><![CDATA[The Aarthi and Sriram Show]]></dc:creator><pubDate>Mon, 26 Aug 2024 04:20:36 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/buL9JicK_sg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We often get asked about what we have learned in our careers that we wish we knew earlier. 
One skill I wish I knew earlier but *didn&#8217;t know I lacked* was the art of making a deal happen. From closing a job offer to negotiating a house, I see some mechanics come up in everything we do.<br><br>In this episode we talk about deals we have worked on and what we have learned across them.<br><br>PSA: we botched the audio on this episode and tried to do the best we could with the audio we recovered. Apologies in advance!</p><p></p><p>Youtube &#8594; </p><div id="youtube2-buL9JicK_sg" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;buL9JicK_sg&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/buL9JicK_sg?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p><br><br><a href="https://www.youtube.com/watch?v=buL9JicK_sg&amp;t=51s">0:51</a> Introduction<br><a href="https://www.youtube.com/watch?v=buL9JicK_sg&amp;t=220s">3:40</a> Deals are long-term relationships<br><a href="https://www.youtube.com/watch?v=buL9JicK_sg&amp;t=362s">6:02</a> Facebook-WhatsApp acquisition story<br><a href="https://www.youtube.com/watch?v=buL9JicK_sg&amp;t=724s">12:04</a> The most important part of deal-making<br><a href="https://www.youtube.com/watch?v=buL9JicK_sg&amp;t=1004s">16:44</a> Don&#8217;t come off as desperate<br><a href="https://www.youtube.com/watch?v=buL9JicK_sg&amp;t=1175s">19:35</a> Maintaining momentum during a deal<br><a href="https://www.youtube.com/watch?v=buL9JicK_sg&amp;t=1553s">25:53</a> Know everything about the person you&#8217;re dealing with<br><a href="https://www.youtube.com/watch?v=buL9JicK_sg&amp;t=2017s">33:37</a> How we closed our London home<br><a href="https://www.youtube.com/watch?v=buL9JicK_sg&amp;t=2274s">37:54</a> Mistakes we often see<br><a 
href="https://www.youtube.com/watch?v=buL9JicK_sg&amp;t=2680s">44:40</a> Managing emotions while deal-making<br><a href="https://www.youtube.com/watch?v=buL9JicK_sg&amp;t=3019s">50:19</a> What to do if the deal falls through</p>]]></content:encoded></item><item><title><![CDATA[EP 80 How to hire great people]]></title><description><![CDATA[Over our careers we have been lucky to have been able to hire a lot of great people - from engineers to executives.]]></description><link>https://www.aarthiandsriram.com/p/ep-80-how-to-hire-great-people</link><guid isPermaLink="false">https://www.aarthiandsriram.com/p/ep-80-how-to-hire-great-people</guid><dc:creator><![CDATA[The Aarthi and Sriram Show]]></dc:creator><pubDate>Sun, 09 Jun 2024 21:54:56 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/HH7mwR2MyDk" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Over our careers we have been lucky to have been able to hire a lot of great people - from engineers to executives. 
Over that time, we have also seen the right and wrong ways to approach hiring (and made many mistakes ourselves).</p><p>In this episode we try and break down what we have learned over the years and what you can do if you want to find, interview and close that amazing hire.</p><div id="youtube2-HH7mwR2MyDk" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;HH7mwR2MyDk&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/HH7mwR2MyDk?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><iframe class="spotify-wrap podcast" data-attrs="{&quot;image&quot;:&quot;https://i.scdn.co/image/ab6765630000ba8a76411da1175a17c4a7a057d9&quot;,&quot;title&quot;:&quot;EP 80 - How Sriram And Aarthi Hire The Right People&quot;,&quot;subtitle&quot;:&quot;Aarthi and Sriram&quot;,&quot;description&quot;:&quot;Episode&quot;,&quot;url&quot;:&quot;https://open.spotify.com/episode/2VBlULPDG9spR2iP98XfJ2&quot;,&quot;belowTheFold&quot;:false,&quot;noScroll&quot;:false}" src="https://open.spotify.com/embed/episode/2VBlULPDG9spR2iP98XfJ2" frameborder="0" gesture="media" allowfullscreen="true" allow="encrypted-media" data-component-name="Spotify2ToDOM"></iframe><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast " data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/the-aarthi-and-sriram-show/id1624345213?i=1000658275999&quot;,&quot;isEpisode&quot;:true,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast-episode_1000658275999.jpg&quot;,&quot;title&quot;:&quot;EP 80 - How Sriram And Aarthi Hire The Right People&quot;,&quot;podcastTitle&quot;:&quot;The Aarthi and Sriram 
Show&quot;,&quot;podcastByline&quot;:&quot;&quot;,&quot;duration&quot;:2901000,&quot;numEpisodes&quot;:&quot;&quot;,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/ep-80-how-sriram-and-aarthi-hire-the-right-people/id1624345213?i=1000658275999&amp;uo=4&quot;,&quot;releaseDate&quot;:&quot;2024-06-08T18:39:33Z&quot;}" src="https://embed.podcasts.apple.com/us/podcast/the-aarthi-and-sriram-show/id1624345213?i=1000658275999" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><p>0:00 Intro<br>0:56 Prioritize hiring yourself<br>5:37 Always keep looking for talent <br>8:10 Align incentives with your hire <br>15:05 How to find the best hires<br>19:15 How to run a good loop<br>24:02 Sriram&#8217;s hiring loop horror story <br>29:51 Reference checks are underrated<br>37:25 How to close the hire<br>39:04 How Twitter closed Sriram<br>41:43 Aarthi&#8217;s Netflix recruitment<br>45:45 Conclusion</p>]]></content:encoded></item><item><title><![CDATA[EP 79: the state of AI with Naveen Rao of Databricks.]]></title><description><![CDATA[We wanted to have a deep dive into LLMs and AI as they exist in May 2024 and were lucky to get one of the pre-eminent experts in the space - founder of MosaicML and VP of gen AI at Databricks - Naveen Rao.]]></description><link>https://www.aarthiandsriram.com/p/ep-79-the-state-of-ai-with-naveen</link><guid isPermaLink="false">https://www.aarthiandsriram.com/p/ep-79-the-state-of-ai-with-naveen</guid><dc:creator><![CDATA[The Aarthi and Sriram Show]]></dc:creator><pubDate>Wed, 22 May 2024 21:03:32 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!-QCp!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fpodcast-episode_1000656421098.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We wanted to have a deep dive into LLMs and AI as they exist in May 2024 and were lucky to get one of the pre-eminent experts 
in the space - founder of MosaicML and VP of gen AI at Databricks - Naveen Rao.  In this we cover:</p><p>- the state of AI in May 2024<br>- reactions to gpt4o and the Google IO Gemini launches<br>- Small models vs large models<br>- open source models - the state of the art, biz models for open source<br>- are transformers the end state of LLM architecture<br>- Agentic behavior<br>- How do we get more data for training?<br><br>And much more! </p><p></p><p>Apple: </p><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast " data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/the-aarthi-and-sriram-show/id1624345213?i=1000656421098&quot;,&quot;isEpisode&quot;:true,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast-episode_1000656421098.jpg&quot;,&quot;title&quot;:&quot;EP 79 - The State of AI: GPT-4, Google I/O, Generative AI Startups with Naveen Rao, VP of Generative AI at Databricks&quot;,&quot;podcastTitle&quot;:&quot;The Aarthi and Sriram Show&quot;,&quot;podcastByline&quot;:&quot;&quot;,&quot;duration&quot;:3237000,&quot;numEpisodes&quot;:&quot;&quot;,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/ep-79-the-state-of-ai-gpt-4-google-i-o-generative/id1624345213?i=1000656421098&amp;uo=4&quot;,&quot;releaseDate&quot;:&quot;2024-05-22T17:33:01Z&quot;}" src="https://embed.podcasts.apple.com/us/podcast/the-aarthi-and-sriram-show/id1624345213?i=1000656421098" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><p>Spotify: </p><iframe class="spotify-wrap podcast" data-attrs="{&quot;image&quot;:&quot;https://i.scdn.co/image/ab6765630000ba8a76411da1175a17c4a7a057d9&quot;,&quot;title&quot;:&quot;EP 79 - The State of AI: GPT-4, Google I/O, Generative AI Startups with Naveen Rao, VP of Generative AI at Databricks&quot;,&quot;subtitle&quot;:&quot;Aarthi and 
Sriram&quot;,&quot;description&quot;:&quot;Episode&quot;,&quot;url&quot;:&quot;https://open.spotify.com/episode/1zRT4zkDX9DDMwUxGiAh5k&quot;,&quot;belowTheFold&quot;:false,&quot;noScroll&quot;:false}" src="https://open.spotify.com/embed/episode/1zRT4zkDX9DDMwUxGiAh5k" frameborder="0" gesture="media" allowfullscreen="true" allow="encrypted-media" data-component-name="Spotify2ToDOM"></iframe><div id="youtube2-6i9O_hU3240" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;6i9O_hU3240&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/6i9O_hU3240?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p></p><p><a href="https://www.youtube.com/watch?v=6i9O_hU3240&amp;t=332s">5:32</a> What do enterprises think of AI<br><a href="https://www.youtube.com/watch?v=6i9O_hU3240&amp;t=480s">8:00</a> ChatGPT-4o and Google I/O reactions<br> <a href="https://www.youtube.com/watch?v=6i9O_hU3240&amp;t=578s">9:38</a> Has generative AI scaling hit a wall? <br><a href="https://www.youtube.com/watch?v=6i9O_hU3240&amp;t=994s">16:34</a> AI pretraining difficulties<br> <a href="https://www.youtube.com/watch?v=6i9O_hU3240&amp;t=1369s">22:49</a> General AI vs. specialized AI<br><a href="https://www.youtube.com/watch?v=6i9O_hU3240&amp;t=1555s">25:55</a> Biggest opportunities for generative AI startups<br><a href="https://www.youtube.com/watch?v=6i9O_hU3240&amp;t=1755s">29:15</a> AI hardware landscape<br><a href="https://www.youtube.com/watch?v=6i9O_hU3240&amp;t=2003s">33:23</a> How can open source AI companies compete with closed models like OpenAI? 
<a href="https://www.youtube.com/watch?v=6i9O_hU3240&amp;t=2339s">38:59</a> Why Naveen doesn&#8217;t use AI in his home <br><a href="https://www.youtube.com/watch?v=6i9O_hU3240&amp;t=2422s">40:22</a> Is open-sourcing AI dangerous?<br><a href="https://www.youtube.com/watch?v=6i9O_hU3240&amp;t=2626s">43:46</a> Advice for founders<br><a href="https://www.youtube.com/watch?v=6i9O_hU3240&amp;t=2820s">47:00</a> Naveen sold to Intel too early<br><a href="https://www.youtube.com/watch?v=6i9O_hU3240&amp;t=3069s">51:09</a> The pace of AI innovation</p>]]></content:encoded></item><item><title><![CDATA[EP 78: (Solo Ep) Career advice in your 20s, the mythical work life balance and betting on yourself]]></title><description><![CDATA[We decided to do a grab-bag of topics we get asked a lot around: what advice would we give your younger selves, how to pick jobs in your 20s, how to take more bets on yourself, &#8220;work life balance&#8221; and much more.]]></description><link>https://www.aarthiandsriram.com/p/ep-78-solo-ep-career-advice-in-your</link><guid isPermaLink="false">https://www.aarthiandsriram.com/p/ep-78-solo-ep-career-advice-in-your</guid><dc:creator><![CDATA[The Aarthi and Sriram Show]]></dc:creator><pubDate>Sun, 05 May 2024 17:46:36 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/9EYaytYBd1Y" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We decided to do a grab-bag of topics we get asked a lot around: what advice would we give your younger selves, how to pick jobs in your 20s, how to take more bets on yourself, &#8220;work life balance&#8221; and much more.</p><p>Youtube: <br></p><div id="youtube2-9EYaytYBd1Y" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;9EYaytYBd1Y&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/9EYaytYBd1Y?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" 
frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Apple: </p><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast " data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/the-aarthi-and-sriram-show/id1624345213?i=1000654588889&quot;,&quot;isEpisode&quot;:true,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast-episode_1000654588889.jpg&quot;,&quot;title&quot;:&quot;EP 78 - Sriram &amp; Aarthi&#8217;s Job Advice, Work-Life Balance, Betting On Yourself&quot;,&quot;podcastTitle&quot;:&quot;The Aarthi and Sriram Show&quot;,&quot;podcastByline&quot;:&quot;&quot;,&quot;duration&quot;:3771000,&quot;numEpisodes&quot;:&quot;&quot;,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/ep-78-sriram-aarthis-job-advice-work-life-balance-betting/id1624345213?i=1000654588889&amp;uo=4&quot;,&quot;releaseDate&quot;:&quot;2024-05-05T16:28:48Z&quot;}" src="https://embed.podcasts.apple.com/us/podcast/the-aarthi-and-sriram-show/id1624345213?i=1000654588889" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><p>Spotify: </p><iframe class="spotify-wrap podcast" data-attrs="{&quot;image&quot;:&quot;https://i.scdn.co/image/ab6765630000ba8a76411da1175a17c4a7a057d9&quot;,&quot;title&quot;:&quot;EP 78 - Sriram &amp; Aarthi&#8217;s Job Advice, Work-Life Balance, Betting On Yourself&quot;,&quot;subtitle&quot;:&quot;Aarthi and Sriram&quot;,&quot;description&quot;:&quot;Episode&quot;,&quot;url&quot;:&quot;https://open.spotify.com/episode/6lckaBWflXU603Gnx8ZVed&quot;,&quot;belowTheFold&quot;:false,&quot;noScroll&quot;:false}" src="https://open.spotify.com/embed/episode/6lckaBWflXU603Gnx8ZVed" frameborder="0" gesture="media" allowfullscreen="true" allow="encrypted-media" 
data-component-name="Spotify2ToDOM"></iframe><p><a href="https://www.youtube.com/watch?v=9EYaytYBd1Y&amp;t=141s">2:21</a> Advice for our younger selves<br><a href="https://www.youtube.com/watch?v=9EYaytYBd1Y&amp;t=800s">13:20</a> Betting on yourself <br><a href="https://www.youtube.com/watch?v=9EYaytYBd1Y&amp;t=1399s">23:19</a> Benefits of taking more shots<br><a href="https://www.youtube.com/watch?v=9EYaytYBd1Y&amp;t=1739s">28:59</a> Work-life balance <br><a href="https://www.youtube.com/watch?v=9EYaytYBd1Y&amp;t=2012s">33:32</a> Aarthi&#8217;s hot take<br><a href="https://www.youtube.com/watch?v=9EYaytYBd1Y&amp;t=2145s">35:45</a> Time to switch jobs?<br><a href="https://www.youtube.com/watch?v=9EYaytYBd1Y&amp;t=2841s">47:21</a> Unpopular opinions: titles</p>]]></content:encoded></item><item><title><![CDATA[EP 77: Heathrow, Wrestlemania 40, Llama 3 and more]]></title><description><![CDATA[We did another solo episode this week.]]></description><link>https://www.aarthiandsriram.com/p/ep-77-heathrow-skilled-immigration</link><guid isPermaLink="false">https://www.aarthiandsriram.com/p/ep-77-heathrow-skilled-immigration</guid><dc:creator><![CDATA[The Aarthi and Sriram Show]]></dc:creator><pubDate>Sun, 28 Apr 2024 17:57:18 GMT</pubDate><content:encoded><![CDATA[<p></p><p>We did another solo episode this week.  We talked about our Heathrow presence. And then Sriram&#8217;s favorite part: recapping Wrestlemania 40. 
Llama 3, Zuck&#8217;s new look, there&#8217;s a lot in here!</p><p></p><p><a href="https://www.youtube.com/watch?v=ASy60DFfHvY&amp;t=0s">0:00</a> We&#8217;re on a billboard<br><a href="https://www.youtube.com/watch?v=ASy60DFfHvY&amp;t=158s">2:38</a> Our podcast journey<br><a href="https://www.youtube.com/watch?v=ASy60DFfHvY&amp;t=1741s">29:01</a> Recapping WrestleMania 40<br><a href="https://www.youtube.com/watch?v=ASy60DFfHvY&amp;t=2961s">49:21</a> Mark Zuckerberg&#8217;s personal rebrand<br><a href="https://www.youtube.com/watch?v=ASy60DFfHvY&amp;t=3240s">54:00</a> Meta releases Llama 3</p>]]></content:encoded></item><item><title><![CDATA[EP 75: Vivek Ramaswamy on Tiktok, wokeness, crypto, immigration, Jake Paul vs Tyson and more. ]]></title><description><![CDATA[Whatever you may think of him, Vivek Ramaswamy went from a relative unknown in mainstream media and politics to having household name recognition in a very short time.]]></description><link>https://www.aarthiandsriram.com/p/ep-75-vivek-ramaswamy-on-tiktok-wokeness</link><guid isPermaLink="false">https://www.aarthiandsriram.com/p/ep-75-vivek-ramaswamy-on-tiktok-wokeness</guid><dc:creator><![CDATA[The Aarthi and Sriram Show]]></dc:creator><pubDate>Sun, 17 Mar 2024 16:05:43 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/MOJzMmH8Jsc" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Whatever you may think of him, Vivek Ramaswamy went from a relative unknown in mainstream media and politics to having household name recognition in a very short time. In this episode, we get into it all: his take on the recent TikTok &#8220;ban&#8221; bill, legal/skilled immigration (an issue that matters to many in tech), his usage of podcasts and the internet to build an audience and even some fun asides like his prediction on Jake Paul vs Tyson. 
Vivek isn&#8217;t shy and this won&#8217;t disappoint.</p><p><strong>Youtube:</strong> </p><div id="youtube2-MOJzMmH8Jsc" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;MOJzMmH8Jsc&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/MOJzMmH8Jsc?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p><br><strong>Apple: <a href="https://podcasts.apple.com/us/podcast/ep-75-the-vivek-ramaswamy-interview-immigration/id1624345213?i=1000649473366">(link)</a></strong><br><strong>Spotify: (<a href="https://open.spotify.com/episode/62a3TAMQtdmtOA0hWOHhhZ?si=6b666c4db71d44a8">link</a>)</strong></p><p></p><p><a href="https://www.youtube.com/watch?v=MOJzMmH8Jsc&amp;t=85s">1:25</a> Lessons from entrepreneurship &amp; campaigning <br><a href="https://www.youtube.com/watch?v=MOJzMmH8Jsc&amp;t=540s">9:00</a> Disagreements in debates<br><a href="https://www.youtube.com/watch?v=MOJzMmH8Jsc&amp;t=775s">12:55</a> TikTok Ban<br><a href="https://www.youtube.com/watch?v=MOJzMmH8Jsc&amp;t=1231s">20:31</a> Wokeness in social media <br><a href="https://www.youtube.com/watch?v=MOJzMmH8Jsc&amp;t=1460s">24:20</a> Vivek's views on crypto<br><a href="https://www.youtube.com/watch?v=MOJzMmH8Jsc&amp;t=1710s">28:30</a> Podcasting strategy for Vivek's campaign<br><a href="https://www.youtube.com/watch?v=MOJzMmH8Jsc&amp;t=1980s">33:00</a> Legal and illegal immigration<br><a href="https://www.youtube.com/watch?v=MOJzMmH8Jsc&amp;t=2237s">37:17</a> Jake Paul vs Mike Tyson<br><a href="https://www.youtube.com/watch?v=MOJzMmH8Jsc&amp;t=2292s">38:12</a> Vivek signs off in Tamil!</p>]]></content:encoded></item><item><title><![CDATA[EP 74: On Gemini, Sora, our new studio and 
more]]></title><description><![CDATA[We wanted to take our new podcast studio for a spin and decided to do a grab bag of topics for last week.]]></description><link>https://www.aarthiandsriram.com/p/ep-74-on-gemini-sora-our-new-studio</link><guid isPermaLink="false">https://www.aarthiandsriram.com/p/ep-74-on-gemini-sora-our-new-studio</guid><dc:creator><![CDATA[The Aarthi and Sriram Show]]></dc:creator><pubDate>Tue, 12 Mar 2024 08:07:16 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/FvLkYt4tlpU" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We wanted to take our new podcast studio for a spin and decided to do a grab bag of topics for last week. From Gemini&#8217;s issues to Sora and much more, this was fun.</p><div id="youtube2-FvLkYt4tlpU" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;FvLkYt4tlpU&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/FvLkYt4tlpU?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p><a href="https://www.youtube.com/watch?v=FvLkYt4tlpU&amp;t=0s">0:00</a> Welcome to the new studio!<br> <a href="https://www.youtube.com/watch?v=FvLkYt4tlpU&amp;t=253s">4:13</a> Our best AI episodes<br><a href="https://www.youtube.com/watch?v=FvLkYt4tlpU&amp;t=623s">10:23</a> Google Gemini&#8217;s launch controversy <br><a href="https://www.youtube.com/watch?v=FvLkYt4tlpU&amp;t=1035s">17:15</a> How can Google fix Gemini?<br> <a href="https://www.youtube.com/watch?v=FvLkYt4tlpU&amp;t=1255s">20:55</a> What Gemini gets right<br> <a href="https://www.youtube.com/watch?v=FvLkYt4tlpU&amp;t=1571s">26:11</a> The rise of Groq<br> <a href="https://www.youtube.com/watch?v=FvLkYt4tlpU&amp;t=2137s">35:37</a> NVIDIA and 
Jensen Huang are on top of the world <br><a href="https://www.youtube.com/watch?v=FvLkYt4tlpU&amp;t=2534s">42:14</a> Is Sora going to replace Hollywood?</p><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[EP 73: Aravind Srinivas of Perplexity.ai on IIT, culture and building the next gen of AI experiences.]]></title><description><![CDATA[Aravind Srinivas has captured people&#8217;s imagination and attention with Perplexity.ai.]]></description><link>https://www.aarthiandsriram.com/p/ep-73-aravind-srinivas-of-perplexityai</link><guid isPermaLink="false">https://www.aarthiandsriram.com/p/ep-73-aravind-srinivas-of-perplexityai</guid><dc:creator><![CDATA[The Aarthi and Sriram Show]]></dc:creator><pubDate>Thu, 29 Feb 2024 19:20:59 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/fJLE_gYkvZY" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Aravind Srinivas has captured people&#8217;s imagination and attention with Perplexity.ai. If you&#8217;ve been on X, you have probably seen his viral tweets. But underneath the tweets is one of the more interesting companies and experiences being built on AI right now.</p><p>What I find most interesting about Aravind is his story, and he really got into it today: getting into IIT in India, figuring out how to get into computers, being inside OpenAI and Google and the differences in their culture, and then his focus on culture now. 
This was a blast.<br></p><p><a href="https://www.youtube.com/watch?v=fJLE_gYkvZY">Youtube</a> | <a href="https://open.spotify.com/episode/2qdtqZ2zOlilUe7B0J3Prq?si=498928eb37ee4e80">Spotify</a> | <a href="https://podcasts.apple.com/us/podcast/the-aarthi-and-sriram-show/id1624345213?i=1000647529945">Apple</a></p><div id="youtube2-fJLE_gYkvZY" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;fJLE_gYkvZY&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/fJLE_gYkvZY?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div>]]></content:encoded></item><item><title><![CDATA[EP 72: Don't die and be epic - Bryan Johnson]]></title><description><![CDATA[&#8220;Be an epic human being&#8221;]]></description><link>https://www.aarthiandsriram.com/p/ep-72-dont-die-and-be-epic-bryan</link><guid isPermaLink="false">https://www.aarthiandsriram.com/p/ep-72-dont-die-and-be-epic-bryan</guid><dc:creator><![CDATA[The Aarthi and Sriram Show]]></dc:creator><pubDate>Sun, 04 Feb 2024 14:27:51 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!m7wn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F987d4a72-e551-45b6-8a13-b8d482d45c1a_1846x964.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>&#8220;Be an epic human being&#8221;</em></p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!m7wn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F987d4a72-e551-45b6-8a13-b8d482d45c1a_1846x964.png" data-component-name="Image2ToDOM"><div 
class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!m7wn!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F987d4a72-e551-45b6-8a13-b8d482d45c1a_1846x964.png 424w, https://substackcdn.com/image/fetch/$s_!m7wn!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F987d4a72-e551-45b6-8a13-b8d482d45c1a_1846x964.png 848w, https://substackcdn.com/image/fetch/$s_!m7wn!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F987d4a72-e551-45b6-8a13-b8d482d45c1a_1846x964.png 1272w, https://substackcdn.com/image/fetch/$s_!m7wn!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F987d4a72-e551-45b6-8a13-b8d482d45c1a_1846x964.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!m7wn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F987d4a72-e551-45b6-8a13-b8d482d45c1a_1846x964.png" width="346" height="180.6043956043956" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/987d4a72-e551-45b6-8a13-b8d482d45c1a_1846x964.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:760,&quot;width&quot;:1456,&quot;resizeWidth&quot;:346,&quot;bytes&quot;:2251534,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!m7wn!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F987d4a72-e551-45b6-8a13-b8d482d45c1a_1846x964.png 424w, https://substackcdn.com/image/fetch/$s_!m7wn!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F987d4a72-e551-45b6-8a13-b8d482d45c1a_1846x964.png 848w, https://substackcdn.com/image/fetch/$s_!m7wn!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F987d4a72-e551-45b6-8a13-b8d482d45c1a_1846x964.png 1272w, https://substackcdn.com/image/fetch/$s_!m7wn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F987d4a72-e551-45b6-8a13-b8d482d45c1a_1846x964.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><p></p><p><strong>WATCH: <a href="https://www.youtube.com/watch?v=4zMeQaumZmY">Youtube</a> | <a href="https://open.spotify.com/episode/1IVCylpYbDAmEspb34bQtn?si=f57aa9687fff4bc3">Spotify</a> |  <a href="https://podcasts.apple.com/us/podcast/ep-72-the-bryan-johnson-interview-how-one-mans/id1624345213?i=1000644047616">Apple</a></strong></p><p><br>You have seen him all over the internet by now: the &#8220;tech billionaire who wants to live forever&#8221;, &#8220;the most measured man in history&#8221;. You&#8217;ve seen his photos and videos; perhaps you&#8217;ve seen him go viral for everything from his Twitter replies. You&#8217;ve definitely heard of his daily diet and workout routine.</p><p>What I&#8217;ve come to appreciate about Bryan is not that, but his clarity of thinking, his ambition, his discipline and also his <em>courage</em> - this is someone who wants to help a lot of people and is willing to put himself out there. 
<br><br>We get into a lot of fun topics including: when he decided to embark on this journey and reject shame,  the relevance of the discovery of zero, his advice outside of diet and exercise, how he thinks of social media and much more.</p><p><a href="https://www.youtube.com/watch?v=4zMeQaumZmY&amp;t=380s">6:20</a> Bryan&#8217;s origin story<br><a href="https://www.youtube.com/watch?v=4zMeQaumZmY&amp;t=883s">14:43</a> The idea that changed Bryan&#8217;s life<br><a href="https://www.youtube.com/watch?v=4zMeQaumZmY&amp;t=1234s">20:34</a> Bryan&#8217;s power laws to living longer<br><a href="https://www.youtube.com/watch?v=4zMeQaumZmY&amp;t=1867s">31:07</a> Improving willpower naturally<br><a href="https://www.youtube.com/watch?v=4zMeQaumZmY&amp;t=2039s">33:59</a> Responding to online criticism<br><a href="https://www.youtube.com/watch?v=4zMeQaumZmY&amp;t=2251s">37:31</a> Creating the Don&#8217;t Die movement<br><a href="https://www.youtube.com/watch?v=4zMeQaumZmY&amp;t=2810s">46:50</a> Bryan&#8217;s blueprint explained<br><a href="https://www.youtube.com/watch?v=4zMeQaumZmY&amp;t=3020s">50:20</a> The future of gene therapy<br><a href="https://www.youtube.com/watch?v=4zMeQaumZmY&amp;t=3208s">53:28</a> Bryan&#8217;s physical appearance<br><a href="https://www.youtube.com/watch?v=4zMeQaumZmY&amp;t=3360s">56:00</a> Call to action to the human race</p><p></p><p></p>]]></content:encoded></item></channel></rss>