Geoff Ralston, founder of SAIF (Safe Artificial Intelligence Fund), and former President of Y Combinator, shares his vision for building a safer AI future. Geoff discusses the risks and promise of AI as a force beyond traditional tools, posing AI as a set of entities that will reshape the way we work, live, and relate to each other. He talks about biosafety, interpretability, and misinformation as key focus areas for innovation. Geoff also shares advice for founders navigating this fast-evolving landscape and reflects on how thoughtful investment today can shape the future of humanity.
In this episode, you’ll learn:
[02:05] Why Geoff believes AI is not ‘just’ a tool but a cognitive force reshaping humanity
[06:29] The subtle but profound difference between tools and intelligent agents
[13:56] Who wins and who loses in an AI-driven future, and what roles must investors play?
[20:36] Can we still design a utopian future with AI?
[24:06] The types of founders Geoff wants to back through SAIF
[26:30] Why mission-aligned safety startups still need product-market fit
[28:46] What happens when AI does everything—and what humans will still choose to do
The nonprofit organization Geoff is passionate about: AI Venture Lab
About Geoff Ralston
Geoff Ralston is the founder of SAIF (Safe Artificial Intelligence Fund) and former President of Y Combinator. A longtime startup investor, entrepreneur, and thought leader, Geoff previously founded Imagine K12, an edtech accelerator later merged with YC. With decades of experience launching and scaling category-defining startups, Geoff now focuses on funding companies that ensure AI becomes a force for good, addressing challenges around safety, security, and the future of human work.
About SAIF
SAIF (Safe Artificial Intelligence Fund) is a venture capital firm dedicated to building a safer future with AI. Founded by Geoff Ralston, SAIF invests in startups focused on AI safety, biosafety, interpretability, and information integrity. The firm supports mission-driven founders creating scalable solutions to counteract risks and ensure that AI technologies empower rather than endanger society.
Subscribe to our podcast and stay tuned for our next episode.
"I am more interested in products and services that counter some of the things that can go wrong. So when I get excited, I get excited by companies that are building biosafety products, which are exciting because when something really bad happens, when dangerous bio agents might be created, mediated by AI, these products can come in and protect us, can cut off pandemics, for example, before they happen." - Geoff Ralston
[00:00:35] Gopi Rangan: You are listening to The Sure Shot Entrepreneur - a podcast for founders with ambitious ideas. Venture capital investors and other early believers tell you relatable, insightful, and authentic stories to help you realize your vision. Welcome to The Sure Shot Entrepreneur. I'm your host, Gopi Rangan. My guest today is a good friend of mine, Geoff Ralston.
Geoff recently launched a new venture capital firm called SAIF, the Safe Artificial Intelligence Fund. He was previously the president of Y Combinator and, before that, the founder of Imagine K12, which merged with Y Combinator. Prior to that, he was a very successful serial entrepreneur and a leader in Silicon Valley.
It's a pleasure to have Geoff. We're gonna talk about AI. What is artificial intelligence? What is exciting about artificial intelligence, and what are some things that we should be worried about with artificial intelligence and how can we make the world a safer place going into the future? I have lots of questions. Let's jump into the conversation with Geoff.
Geoff, welcome to The Sure Shot Entrepreneur.
[00:01:47] Geoff Ralston: It's great to be here, Gopi. I'm excited to have the conversation.
[00:01:51] Gopi Rangan: I'm looking forward to it as well. This is a very pertinent conversation given where we are in the world today. Let me jump straight in. What is exciting for you with artificial intelligence today?
Why is this such a big deal now?
[00:02:05] Geoff Ralston: I don't know how to answer the question about how exciting it is. I'm both incredibly excited about this point in time, which I believe is a seminal moment in human history, and super anxious about it, because there's so much that is unknown and worrying about this particular time; it's going to be a time of such incredible change. For the first time in history, mankind has created a tool which is more than a tool, Gopi. It's an entity. It is something that is cognitively more capable than most humans today. Today, ChatGPT or Claude or any of the other large language models knows more than any human being, can answer more questions on more topics than any human being.
Now that's notable. We should stop and think, "wow, that is new. We can now have a conversation with something that's not a human. And first of all, not really be able to tell at some fundamental level within ourselves that it's not a thinking mind that we're talking to. And also get enormous amount of useful knowledge and information from that mind."
That's never happened before. And the change that took us there has happened incredibly quickly. And this is the reason that I make the claim that now is a moment in time different than any other moment, and that the change coming will be extraordinary. The capabilities that we'll have with this new tool, this new entity or these new entities are amazing.
The possibilities are great. Just the ones we already knew about: the ability to replace human drivers with self-driving cars. That's incredible. You know, a million people worldwide die every year in automobiles. This is why Marc Andreessen said it's tantamount to murder to slow artificial intelligence down, because once you replace all of those human-driven automobiles with self-driving cars, that number will drop precipitously. That's amazing. That's great. We'll be able to solve some of the most intractable problems facing humanity with these new tools, these new capabilities, and the new power that comes with this technology.
But by the same token, there's a lot of uncertainty. Will these new entities take jobs from humans? And at what level will they take them? Will they create capabilities to do dangerous things, maybe awful things, to humanity, and how will we prevent that? That's why I started my fund: to back companies that will hopefully build protections for us, to build tools, products, and services that make the future that is coming (and it's coming regardless of what you want to believe) more likely to be beneficial for humanity, more likely to allow human beings to flourish in every part of their lives. So yeah, I'm excited because so much is changing, and I'm anxious because, no matter what anyone tells you, nobody can quite predict exactly how this change is going to roll out and impact all of us.
[00:05:39] Gopi Rangan: It's very interesting, your choice of words. You're calling this not a tool, but a set of entities with its own mind. We can actually interact and ask questions and learn and improve ourselves also. As human beings, we learn much faster and better in a very customized way. Millions of lives can be saved when these entities or tools go out into the world in a positive way.
But I understand the fear part a bit. I want to get to the fear part, but I want to talk about why you call this not a tool. Isn't it similar to what happened when the mobile revolution came through or when the internet happened? Those are all tools, maybe platforms. How is AI different from the changes that we saw using technology in the previous few years?
[00:06:29] Geoff Ralston: I think that's an astute and great question, one a lot of people are struggling with, and there's a lot of disagreement as to whether AI is something substantively different from the tools and technologies created in the past, or whether it is normal. There was a recent paper called "AI as Normal Technology" that says, no, you're making a mistake; it's not as much of a quantum leap as you think it is.
Throughout history, people have thought about machine intelligence as something different, something that would transcend what we normally think of as a tool (a hammer, a saw, a spreadsheet) driven by human beings, toward something more autonomous, more independent, more like a mind quite separate from its human operator. In 1950, the British mathematician Alan Turing wrote a paper on machine intelligence, and in it he created something he called the Imitation Game. The Imitation Game was meant to help you determine whether you had something different, something more like a machine intelligence, an intelligence that ought to be differentiated from a tool.
We all call that test the Turing test now. It's a test where you can't see who your interlocutor is: you're usually typing on a keyboard, and you can't see who's typing on the other end. And if you can't tell whether the entity you're talking to is a human or a machine, then, from Turing's perspective, you might as well assume it's a mind. There's been a lot of criticism of the Turing test, and there's this concept of artificial general intelligence, AGI, which is what that test was, I think, originally supposed to determine.
And now we've raised the bar, because it is clearly true that an LLM can pass the Turing test if you set it up correctly. It's actually interesting: the reason an LLM wouldn't pass the Turing test today is that it knows too much. You know, if I'm sitting there talking to you, Gopi, and I start asking you really abstruse questions about biology or ethnology or geography or geopolitics or history or anything, and you know all the answers, I'd start to suspect maybe there's not a human on the other side. But if you ask the LLMs to do a Turing test and pretend they're human, well, they dumb themselves down, and then you can believe they're human. So we've raised the bar for what artificial general intelligence means now: it's something that can do anything a human can do, at least as well as a human can. That's why I think it's a different sort of tool. It's increasingly independent.
This might be the year of the agent, and you hear a lot about agentic technology in 2025, because agents run off and do things. If you're a programmer like I am and you use the tools of today, Cursor or GitHub Copilot or many of the other tools that help you program (almost all the LLMs help you program), it's mostly working with you. You type some code and it finishes the code; it sort of understands what you're doing and writes more code, but you're really directing it. It really feels more like a tool. But with an agentic version of that, you say, "Hey, go write this code, go build this website, go create this application," and it just goes and does the whole thing. Now, your prompt might be a little more sophisticated than that, but it really feels different, right? It's separate from you. It's going off and doing it and coming back and reporting, much as a human programmer working with you on a project would.
So there's a different feel for how an independent entity acts, works, and thinks with you than for what we normally think of as a tool, which is really, you know, like a hammer in our hands that we're directing. It's a sometimes subtle but, I think, substantive difference between the two.
[00:11:00] Gopi Rangan: A tool is something that improves efficiency and humans have control over how and when the tool is used. But in this case, with AI, it's not only about improving efficiency, it's actually changing the way we think and behave and my agent can talk to your agent and design our lifestyles that we wouldn't even know what happened until much later. And that seems quite strange and scary.
[00:11:30] Geoff Ralston: I don't think the dividing line between tools and independent agents is how they impact us. Just as an example, social media has changed the way many human beings think.
Mobile technology has changed the way humans think and act. So I think technology can indeed have profound effects on our psychology, how we act, even who we are. But I do agree that once you can have a companion, a friend, a confidant who is no longer a human being, that too will have a perhaps more profound effect on human behavior and human psychology that we don't fully understand yet. We didn't fully understand what was going on, especially with our youth, social networks, and the connected world, and it took us decades to start to get a handle on that.
This is going to happen faster, be more profound, and give us more to worry about. Now, there's a super positive side to that, because there are a lot of lonely people, and if lonely people can be less lonely, that's wonderful. Maybe they'll be happier and they'll even be more productive members of society. But you can also imagine cases where people pull back from society and from humanity, because in some ways an AI companion will be better, less human in positive ways. They won't be irritable, they won't have bad days. They'll always be positive. They'll always be supportive. It's very hard for us to be the perfect companion, the perfect person, because of our humanity and our failures.
Our failures are in some sense what make us human. By the same token, our failures are kind of what can be really annoying to other people. And so I think the consequences, predicted and unintended, of these new entities playing a role in human affairs are concerning.
[00:13:27] Gopi Rangan: A few years ago, I came up with this idea: why don't I teach a system to be the best version of me, so I can ask it for advice? That's basically me, at my best, giving advice to me, so I don't have to worry about getting bad advice from unreliable sources, and I trust myself.
In your mind, who are the winners and losers in this change that's happening with artificial intelligence? Who benefits and who doesn't?
[00:13:56] Geoff Ralston: Well, look, artificial intelligence can solve some of the hardest problems facing us, and we all benefit from that. So if artificial intelligence helps us cure diseases that we've had difficulty with (cancer is the canonical example), then we all benefit, Gopi. Safer cars, curing diseases, helping us deal with intractable problems like climate change: that helps all of us for sure. There are generally, however, winners and losers in every technological shift. In this particular one, it's a little unpredictable who's going to win and who's going to lose. But one prediction that's been made, that you can read about, is that if artificial minds take over most of the economically productive work in the world over time, that might lead to a very economically static future with very little social mobility. That sort of means anyone who goes in with an economically advantageous position stays in that position. And anyone who goes in with a disadvantaged position might actually do better; they might still benefit, but things will remain very static. The haves will have, and the have-nots will have more, which is great, and maybe everyone will have enough so that it's fine, but maybe this gap will persist indefinitely.
But that's not the only scenario. Maybe there's a scenario with more equality, because maybe everyone is wealthy and everyone has what they need. In 2005, Ray Kurzweil wrote a book that I think many technologists read called The Singularity Is Near. When he said near, he meant decades: he was predicting that in the late 2030s and 2040s there would be this creation of super powerful computers and machine intelligence. His point in calling it a singularity was that it's just super hard to predict what comes next. He made predictions anyway, and his predictions were mostly that it would be incredibly beneficial for humanity. I sincerely hope he's right. But the bottom line, what we should all take away from this, is that you will hear lots of predictions for what's coming, and it's very hard to get those right, because the change will be so profound.
[00:16:36] Gopi Rangan: Technology's changing so rapidly that it is definitely much harder to predict what's going to happen in the next four or five years than it was 20 years ago.
Are you concerned that the cleavage between haves and have nots will deepen and widen?
[00:16:51] Geoff Ralston: That has been an effect of automation in the past. Think about it logically: at one point in time, people built factories, and even if they did it to the minimum extent possible, they were forced to share the wealth with all the factory workers.
You could argue that they should have shared much more, and it wasn't until we had trade unions and other ways of shifting power from managers to workers that it became more fair. But nevertheless, that wealth had to be shared in some sense. If you can replace all those factory workers with robots, so you only have 10 people running the factory instead of a thousand, then clearly more of that wealth goes to the management and the owners, and that can create wealth inequality. Similarly, if you have a company with a thousand software engineers and you can replace them with agentic entities that do the same work without requiring much pay, or food, or places to live, then again, that's more concentration of wealth. So you can imagine a scenario, and people are already imagining it, where you have billion-dollar companies built with a tiny number of employees.
You've already seen that happening somewhat with technology. I think WhatsApp was purchased for $19 billion when it had about 50 employees. Think about that, right? So you can imagine very large companies being built with very few employees. I do think that's a concern, but as I was just saying, how that really plays out over time is unpredictable, because every layer might get replaced, and then maybe you have companies that are entirely artificial. Then what happens to the wealth created by those sorts of companies? Who owns it, in fact, and how does it get distributed? Thinking about how wealth gets distributed in the future is going to be extremely complicated. That's why there's a lot of conversation about things like UBI, universal basic income, but remember, it's mostly basic income. So how wealth gets distributed in the future is an open question, I think.
[00:19:24] Gopi Rangan: Don't you feel like tech has been largely responsible? Of course, not a hundred percent of the time. When we talk about wealth creation: if you look at the Industrial Revolution, a big portion of wealth was accumulated by very few people, but stock options and equity ownership have distributed wealth much more evenly among employees and workers in the tech industry.
The tech industry has been a leader in creating value, but not very good at capturing that value. In fact, a lot of the value that was created with the successful tech companies was captured by the market. Companies like Google and Amazon and many others that went public, they created far more wealth after they went public compared to the wealth that was captured by the tech leaders that created those companies in the early days.
So largely, tech has played a role that has been a net positive so far. Some of those things are changing. Are you worried that we're now going into a future where tech is becoming so powerful, where it influences politics, democracy, and job creation, that we need to think about this more carefully?
[00:20:36] Geoff Ralston: You make an interesting point. It is certainly true that when a company like Facebook/Meta or Google goes public and anyone can buy the stock, whether they're in the tech industry or not, wealth is created for all of those folks, the market, as you call it, as those companies go up.
If you think about Berkshire Hathaway and Warren Buffett, sure, he created a ton of wealth for the employees of Berkshire Hathaway, the many thousands of employees and shareholders. So this engine of growth is great for the economy overall. And clearly the employees and founders of tech companies benefit extraordinarily from owning pieces of the company, as do all the other folks who get to own it, because they sell that ownership, the pieces of the company, very broadly.
And even in the example I gave, where you have a company that is entirely agentic, with no humans involved at all: if it lists itself, and you or I can buy shares, and that company creates enormous wealth, then we benefit from that as well. So you can imagine society still benefiting from that wealth creation.
It's strange, though, right, and entirely unpredictable how that is going to play out. Some people, I think even Sam Altman, have made predictions of many multiples of growth in GDP. And when that happens, all boats will rise with the overall economy; anyone who's part of that economy will benefit, and there'll be a lot of wealth, hopefully, to share amongst everyone, and hopefully to build an incredibly beneficial society where things like universal healthcare are obvious, because we want everyone in the society to be healthy and well.
And so it is not too hard to imagine more utopian outcomes from this technology, this wealth creation, and the capabilities artificial intelligence can confer on humanity. My goal is to help us figure out how to get there. But look, there are also downside scenarios you can imagine. There are other things AI enables that aren't so great, like incredible surveillance states that can watch all of us all the time, Orwellian versions of the future that are easily possible and are being created today in states like North Korea and China. So how do we avoid that future yet welcome the future you can imagine being really wonderful, where people are free to do what's in their best interest and what they like to do, and don't have to worry about the sorts of things that are really endemic in the United States: you know, if you lose your healthcare, what happens to you if you get sick? There are hundreds of thousands of bankruptcies every year in the United States because people get sick. That's a pretty unfortunate state of affairs, and maybe AI will help us to a better future. That's what I hope.
[00:23:49] Gopi Rangan: This is such a fascinating time to ask the question: how do we design the future? I'm so excited about what you're going to do with SAIF and the kind of founders you're going to support.
What are specific teams or topics you are excited about when founders come to you to talk about starting a new company?
[00:24:06] Geoff Ralston: Well, because I'm in the business of thinking about safety and security, that question's a little different for me than for people who are saying, "oh, we're creating this new AI future that's going to immediately benefit everyone by making people more efficient or creating things people want," which is Y Combinator's model. I'm more interested in products and services that counter some of the things that can go wrong. So when I get excited, I get excited by companies that are building biosafety products, which are exciting because when something really bad happens, when dangerous bio agents might be created, mediated by AI, these products can come in and protect us; they can cut off pandemics, for example, before they happen. I'm excited about interpretability products that can help us understand why the decisions made by these rather mysterious entities were made. You might notice that interpretability is starting to get built into the large language models: as a model figures things out, it tells you what it's doing and what its sources are. I think those are critical components to making these entities, and the directions they're going in, more understandable, so we can make sure they go in directions that are beneficial. I'm super excited about tools and products that make misinformation, disinformation, and deepfakes, all problematic creations of artificial intelligence, more manageable, making the world and the factual nature of the world more reliable.
I think those sorts of things are critical to making a beneficial future, and I'm super excited to fund founders who are working on those and other areas of AI safety.
[00:26:02] Gopi Rangan: Very, very interesting topics indeed. What's your advice to founders who are building in these spaces? It's hard enough to build a startup, a successful business, let alone in these specific areas: biosafety, interpretability, preventing bad things from scaling very quickly.
How do you stop that? When founders are building businesses focused on those topics, what can they do to position themselves to be more successful? What's your advice to them?
[00:26:30] Geoff Ralston: Well, thankfully my advice for founders doesn't change based on the domain they're in, whether it's AI safety, AI productivity, fintech, what have you: you need to identify a real problem that people have or will have, then create a product that solves that problem. As I mentioned earlier, Y Combinator's motto is "make something people want." Paul Graham came up with that because he once created something people didn't want, and he said, oh, it's really better to create something people actually want. So if you're creating a safety mechanism that no one wants, it won't be successful; it doesn't matter, and maybe you belong in a nonprofit or in a different domain. But if you can figure out how to build a product that people not only want but that actually acts to improve the beneficial nature of artificial intelligence, then you've got a shot. Then you've got a possibility of raising funds, of finding product-market fit, and of building a real company, hopefully one that can scale quickly.
That's kind of my presumption in funding safe artificial intelligence companies, because I think things are going to happen so quickly, and the change that is coming will be so fundamental, that if a startup is going to have an impact, it's going to have to scale quickly.
Now look, the good news is that the speed of scaling has accelerated, right? I think ChatGPT went from zero to a hundred million users in something like eight weeks, and I think Threads got to a hundred million users in a matter of days. And I think ChatGPT's next hundred million, or some hundred million, came in hours.
So scaling has gone vertical. That's one of the things I'm counting on because I think some of these safety companies are going to have to scale vertically in order to have the requisite impact to really make a difference.
[00:28:27] Gopi Rangan: I'm very curious to ask you this question because you talked about so many things that AI can do for us.
Universal basic income will take care of us, but that's still basic. So what's our role? What happens when AI does everything for us? What do we do?
[00:28:46] Geoff Ralston: Economists will tell you that in every technological revolution there has been a flowering of new jobs and new opportunities. Then they'll ask the question: why is this different? Why will this be different? And maybe it won't. Maybe they're right. But there's concern that because we're creating cognitive agents that are, in some sense, equivalent or superior in capability to humanity, this is different. For any new job you can imagine, you have to ask yourself the fundamental question: why is that job more appropriate for a human, better done by a human, than by some artificial machine intelligence? I think it's a very difficult question. So one can imagine a future where most of the core economically valuable jobs are done by AI agents.
What does that leave for humanity? It's easy to feel a little hopeless in that situation, but lately I've started to come to the conclusion that we will want humans to do certain things. There will always be a certain number of jobs that are human-centric, because that's what will appeal to us, and that'll be fine. We're okay with being less good chess players, even though the AIs will always be better chess players. We'll always have human athletes and human artists, because that's what we'll want. And we'll always have restaurants where there are humans making our food and serving us, because that's what we'll want there. There might be completely automated restaurants as well, but maybe we'll all gravitate to the old kind, the current kind, because that feels better to us. So I do think there'll be a flowering of new/old jobs that we'll want to keep in the human domain, and perhaps there are some things I can't imagine yet that will always be better for humans to do. I'm not sure what those are. I do think there'll be a large section of the economy given over to artificial agents, and maybe that'll be fine too.
[00:30:46] Gopi Rangan: I think that time is coming soon, and I'm very curious to see how we're all going to evolve. Geoff, we're coming towards the end of our conversation, and I wanna ask you about your community involvement. Is there a nonprofit organization or community activity that you are passionate about? Anything new you wanna talk about?
[00:31:05] Geoff Ralston: It turns out that most of the AI safety work up until now has been in the not-for-profit realm, and I think a lot of those organizations are doing amazing work. I'm also working with our shared alma mater, INSEAD, with its AI Venture Lab, which is trying to help prepare founders for the AI future.
My undergraduate alma mater, another alma mater of mine, is also thinking hard about artificial intelligence and how it can contribute to that AI future. Both schools are thinking hard about leadership and what leadership looks like in this AI future. I think that's incredibly important, and it's deeply personal and important to me to work with them.
And there are other organizations trying to do the sort of work, less obviously suited to for-profit ventures, that's necessary to protect humanity as we enter this future. There's, for example, a new group called the Golden Gate Institute, which is trying to create important conversations around AI, and I'm spending some time with them.
And there's a whole bunch of organizations dedicated to creating a beneficial future for humanity with AI. For me personally, working with for-profit companies, which have perhaps a better chance of scaling, a better model for scaling, means having, if not a bigger impact, an important impact on creating this safe future for all of us in the presence of what we know will be there: very, very powerful artificial intelligence technologies that are going to impact every aspect of our lives. So my goal is to do whatever I can with these community organizations, these nonprofit organizations, and the for-profit organizations to help ensure this beneficial future.
I think I owe that to the future. I owe that to my kids and to everyone's kids, who are going to be firmly in this future we're creating.
[00:33:15] Gopi Rangan: Geoff, it's always a pleasure talking to you. Thank you very much for sharing your deep thoughts and your perspectives on how AI is gonna shape our future. I look forward to sharing your nuggets of wisdom with the world.
[00:33:28] Geoff Ralston: It's always great talking to you, Gopi. I appreciate it and I appreciate everything you're doing. Take care. Talk again soon.
[00:33:33] Gopi Rangan: Thank you, Geoff.
Thank you for listening to The Sure Shot Entrepreneur. I hope you enjoyed listening to real-life stories about early believers supporting ambitious entrepreneurs.
Please subscribe to the podcast and post a review. Your comments will help other entrepreneurs find this podcast. I look forward to catching you at the next episode.