Krish Ramineni (@krishramineni, CEO/Founder of Fireflies.ai) talks about what it's like to build an AI product company in both the pre-LLM and post-LLM eras. We also discuss privacy and security concerns and the AI behind the scenes.
SHOW: 805
CLOUD NEWS OF THE WEEK - http://bit.ly/cloudcast-cnotw
NEW TO CLOUD? CHECK OUT OUR OTHER PODCAST - "CLOUDCAST BASICS"
SHOW SPONSORS:
SHOW NOTES:
Topic 1 - Welcome to the show. Before diving into today’s discussion, tell us a little about your background.
Topic 2 - Our show and listeners tend to be interested and employed in the Enterprise infrastructure and AI/ML space. Some may find it surprising that we are talking today, but we wanted to really dig into how an up-and-coming AI company provides value at scale from individuals all the way to large enterprises. What goes into both building the product as well as taking that product to market? So, let’s start there. You recently posted about “Free AI” on LinkedIn. What was the problem you were trying to solve, and how did that influence the product you built?
Topic 3 - As the foundational models in the industry keep improving and go multi-modal, do you worry that the LLMs of the world might push out specialized models? How do you think about staying ahead of the curve? How do things like GPU shortages, or big companies like Meta purchasing thousands of GPUs at a time, impact your decisions?
Topic 4 - Fireflies.ai is all about abstracting the technology away from the user. Users have no idea (and shouldn't) about the back end and everything "behind the curtain". How do you think about this abstraction layer from a product standpoint?
Topic 5 - Now, let’s talk about PLG vs. traditional Enterprise software sales models. You did another post about that recently. We’ve worked in environments selling both (sometimes at the same time), and they are very different motions. Do you feel both are needed to build an AI company?
Topic 6 - How do security and compliance with IT departments fit into all of this? I've spoken to customers that have a policy of no AI tools at the personal level, for instance, because client, company, and private data might be at risk, so only certain vetted and approved tools are allowed. I've seen other companies only allow tools licensed by their corporate IT. How do you navigate this issue? How does something like GDPR play in here?
Topic 7 - Last question: another AI-specific concern we hear about is companies training models on user data. What are your thoughts here? How does a company fine-tune and train new models and products while keeping customer and company data from leaking out?
FEEDBACK?