
Generative AI and Product Strategy (Panel Discussion)

We invite an operator-turned-AI-focused VC, a product director at a series D startup, and a founder of an LLM infrastructure startup to discuss key questions like:

  • Are LLM capabilities over- or under-rated? What are commonly missed limitations of this new technology?
  • What are the common mistakes you’re seeing as companies attempt to incorporate LLMs into their products?
  • What are the net new use cases that can be addressed by LLMs, and which companies are best positioned to solve them?

Allison is a solo GP investor and independent board director, spending her time supporting the CEOs of SaaS companies such as dbt Labs (on the board), Jasper (the fastest-growing SaaS company ever), Mutiny, Hex, and Tackle. She was formerly the COO at Gainsight and scaled the company from $1M in ARR to 650 employees, paving the way for its $1B+ exit to Vista in 2020. Besides building dozens of functions, she has published hundreds of thought leadership pieces about Customer Success to help build the software category.

Jonathan is an experienced co-founder working in the computer software industry. He is skilled in AI, DL, ML, Python, statistics, and probability. He has a strong entrepreneurship background, having graduated from The Cooper Union for the Advancement of Science and Art.

Vijay is a Product Management leader with 11 years of experience across consumer and B2B products, including Google AdWords, Inbox by Gmail, Jibo, and Heap. He is passionate about creating empowering products that push the technological and UX envelope, and more recently at Heap creating tools that enable product teams to move with agility and make informed decisions at the speed of their business. Outside of work, Vijay enjoys hikes, road trips, and taking cooking classes in every country he visits.

Matt Dupree is a wannabe philosophy professor turned wannabe tech entrepreneur. He’s worked as a software engineer, product manager, and CTO. His last real job was at Heap on the data science team. He’s currently building ATLAS, an LLM-powered app guide that can give in-app walkthroughs for any user task.

Transcript

Matt Dupree 0:07
Hey, everybody, we’re back. Excited to moderate a panel of folks to talk about AI and product strategy. First, we have Jonathan, the founder of PromptLayer — there he is. Jonathan, thanks for joining us, and great talk earlier. You’ve seen a lot of interesting use cases for LLMs as part of building out PromptLayer, so I’m really excited to have you here and get your insights on the state of things. Next, we have Allison Pickens, an AI-focused investor and former COO of Gainsight, so I’m really excited to have her perspective given her background. And finally, we have Vijay, the Director of Product at Heap. Heap is doing some interesting things with LLMs and thinking in a really nuanced way about prioritization, so I’m excited to have his perspective too. Thanks for joining us, everybody. Let’s start off with something Jonathan said in his talk: that the AI hype is calming down a little bit. I think there are other signals of this beyond what you said — I saw a Reuters article about ChatGPT usage being down, and there’s been some decline in usage for other AI-native startups. So let’s just start there: do folks agree with Jonathan? Do we think the hype is dying down a little, or is this just some weird seasonality? Any thoughts on this?

Allison Pickens 2:10
I can dive in a bit. I can’t say I’m a macro expert, or an expert on trends, but I can share my somewhat limited perspective. With the ChatGPT usage declines specifically, there’s probably some element of seasonality, with students taking the summer off — I think students were probably a pretty significant percentage of total usage. There might be another part to this as well, which is that major adoption cycles like this aren’t linear; they’re a little bumpier than we would expect, particularly because people are trying to figure out how to incorporate LLMs into their workflows. I think that’s actually a huge opportunity for the future, and for a lot of startups as well — we can talk specifics later. But think about ChatGPT: it’s a siloed tool. If I’m trying to use it for marketing, or for creating content for sales or customer success use cases, there might be more purpose-built tools for that.

Matt Dupree 3:15
Go ahead, Vijay.

Vijay Umapathy 3:17
Just piggybacking off of that — I don’t think it was ever a reasonable assumption that the world was going to stop using the rest of the internet and only interface with information through a chat window like ChatGPT. Especially when they released plugins, some people were really hyped about that. But frankly, if you go back to UX basics, you don’t want a high-friction experience just to give an LLM the right context. ChatGPT has probably inspired a lot of companies to realize the usefulness of LLMs, and I think we’re just getting started on the value of LLMs. And you’re right — a lot of these companies are trying to figure out how to integrate them into their workflows, because that is the easiest way to give the right context to these models and actually get value.

Matt Dupree 4:08
Yeah, that makes sense. So you guys see it more as an S-shaped curve — it’s not just going to go exponential forever.

Vijay Umapathy 4:16
ChatGPT is not a good measure, right? I don’t think any of us really care about its daily active users as a measure of the usefulness of LLMs — I think that’s not a useful exercise.

Matt Dupree 4:29
I love it.

Allison Pickens 4:30
I think the primary value of ChatGPT was to educate the broader lay public on what LLMs can do to change their lives. What’s more interesting, I think, is the more purpose-built applications to come. It’s likely that a lot of folks got pretty excited about LLMs for a period, started to try out some use cases, and then realized the workflow wasn’t sufficient. For example, several months ago my husband and I decided to geek out on a Saturday night and try out Midjourney for the first time. If you’ve used Midjourney, it’s not built with a particular user in mind, and particularly not a layperson. So we tried it out for a few hours; I eventually hired an intern to poke around and see if there were use cases for my fund or portfolio companies. But after that initial attempt, I think I’ll probably wait until there’s some application that’s built into my day-to-day.

Matt Dupree 5:27
I see, that makes sense. Jonathan, did you have any thoughts? It looked like you unmuted for a second.

Jonathan Pedoeem 5:33
I was just going to add that, from our perspective as a provider, there are two empirical things. We do see a transition toward more mature users, compared to the hobbyist hackers who were flooding in at the beginning. So there’s maybe less volume, but the quality is higher — it’s now real companies coming through our door and talking to us. That’s one thing. The other thing, empirically, from my non-technical friends and older people I know in the community: when ChatGPT was coming out, everybody was talking about it — "I used it for this email, I used it to write my daughter’s entrance letter to this camp," things like that. Now I’ve even had a few people come to me and say, "yeah, it didn’t do so well on this task or that other task." The general public that was flooding in has started to see where it actually works. I have a friend who’s going into medical school and was using it for his application letters — he realized he can use it there, but he can’t use it everywhere. So there’s a little bit of that. But to summarize: definitely a maturing of the users coming through our door, a lot more mature companies coming through.

Matt Dupree 6:50
Yeah, thanks for that observation — that’s interesting. Now that you mention it, I’ve noticed something similar: some of the folks that signed up for our waitlist early on said, "oh, I just saw you in the AI newsletter," and they didn’t work for a particular company — they were just messing around. That’s been less true lately, so I think that’s an interesting observation. Okay, cool. Moving on a little bit — I think we kind of already got an answer to this. It seems like the group consensus is that ChatGPT is basically a demo or a toy, not really serious. But I’m curious whether people think there are any use cases where ChatGPT will replace more traditional uses of the internet. People were saying this about Google for a while — I think that’s worn off — but I’m curious if there’s any sliver of usage that will be displaced by ChatGPT.

Vijay Umapathy 7:41
Cheating on homework?

Matt Dupree 7:47
All right, yeah, fair enough. That’s about the answer I’d expect given what y’all said earlier. That’s a good one. Oh — wasn’t there a professor who asked ChatGPT whether people were cheating, and it said yes or something, and then he gave people zeros? Did people see this? Did I make this up? Did I hallucinate it? I feel like I saw something like this. Alright, nobody remembers — I hallucinated it. Sorry.

Vijay Umapathy 8:16
It’s tricky. Honestly, I can’t think of a whole lot of them, because in a lot of these cases you’re inevitably going to get better results by providing better context, and you’re only going to do that if you’re embedded in actual workflows that have that context. So I think the end state is that very few things actually make sense in ChatGPT rather than in a separate tool.

Matt Dupree 8:42
I love it. I met somebody last week who said this is the end of the internet — everybody’s just going to use ChatGPT — and I wish he were in this room so you guys could duke it out. But anyway, moving on. Jonathan, did you want to add something? It looked like you did.

Jonathan Pedoeem 8:57
I was going to say, for me personally, as someone who codes, I do use ChatGPT for some coding questions — and honestly, it’s a good UX for that, in my opinion.

Vijay Umapathy 9:09
What about integrating it into your IDE? For example — I was just playing around with this a few weeks ago — I’m writing something in a language I’m not that intimately familiar with, and I’m going to make mistakes. I’ll ask ChatGPT to write me a boilerplate starting point, and it’ll do that. Then inevitably you reach a point where you get an error, so I go into my terminal, copy-paste the error back in, and say, "I ran into this error, what do you think I should do?" But that step of hopping out of my context into a separate application, copy-pasting back and forth, without necessarily having a good notion of the history — that’s a lot of work. If this were just in my IDE — this is why I think Copilot and all these other tools are going to be the center points for this, right?

Jonathan Pedoeem 10:01
Yeah. All right, fine. I agree.

Matt Dupree 10:04
We figured it out — ChatGPT is just the demo. It’s done. Alright, cool. I think we hinted a little at the answer to this next question too, but I’m curious what folks’ takes are on whether current LLM capabilities are over- or underrated. Jonathan, you suggested that people are starting to calibrate on this a little more — maybe expectations aren’t crazy anymore, or maybe they’re just right and people understand what these things are now. So let’s start there: do we think they’re over- or underrated currently?

Allison Pickens 10:44
Yeah, sure. I think they’re overrated in areas where precision is required in the answer, or some kind of convergent thinking — meaning thinking about concrete solutions. Interestingly, there’s been this emerging category of companies trying to help people get access to their company’s data — to query the data and ask, you know, how many of my users gave us a net promoter score of this, or what have you. The expectations there were pretty high, and I think it’s hard for these companies to meet those expectations, because really, when you’re chatting with generative AI, you’re chatting with someone who is supremely creative and more attuned to divergent thinking.

Matt Dupree 11:35
Yeah, that’s a really interesting point. Okay, so overrated in this convergent-thinking area specifically — I think that’s right. If I can say a little bit about what we’re doing: we’re trying to use LLMs to help people learn how to use software, and part of the decision to make that our mission is really a recognition of the limitations of something more ambitious. You could try to make LLMs that just use the software outright, but that requires a kind of convergent thinking that I just don’t think is there yet. So agreed there. Any other thoughts from other folks?

Vijay Umapathy 12:14
I totally second this notion. I work at a product analytics company, so we’re constantly exposed to everyone being obsessed with the text-box interface for doing all analytics. And I completely agree that it’s an environment where the cost of hallucination is really high — it takes a lot of effort to build trust in data, and very little effort to erode that trust. I do think tools that have code as an input — for example a SQL runner, or something syntactically specific that the average person may not be good at — are places where LLMs can be really helpful in giving you a leg up. But even then, it comes with a ton of caveats. An LLM that, say, writes SQL for a reporting tool is solving two problems, and it’s doing one of them a lot better than the other. It’s solving a workflow automation problem — generate functioning SQL code — and it’s probably going to save you a lot of time there and do a decent job of it. But is it referencing the correct events? All the little nuanced pieces that actually get you to "is this the right answer?" — those are the areas where those kinds of products are probably going to struggle a lot. So I actually think they may be overrated out of the box, but underrated if you put in the right investments in creating the right APIs underneath, exposing access to that data. If you do a good job of that and invest a lot in the infrastructure — what can I do to give the LLM the cleanest, most reliable data set to reduce hallucinations? — then I think people may be underrating how useful it can be in the long run.
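As a rough illustration of the pattern Vijay is describing — grounding SQL generation in a curated schema so the model references real tables, columns, and events — here is a minimal sketch. The schema, prompt wording, model choice, and workflow are hypothetical assumptions for illustration, not Heap’s actual implementation; it assumes the OpenAI Python SDK (v1+) with an API key configured.

```python
# Minimal sketch: ground SQL generation in a hand-curated schema so the model
# only references documented tables, columns, and event names. (Hypothetical.)
from openai import OpenAI  # assumes openai>=1.0 and OPENAI_API_KEY set

client = OpenAI()

# Hypothetical, hand-curated schema description -- the "infrastructure
# investment" that gives the model clean, reliable context.
SCHEMA = """
Table events(user_id TEXT, event_name TEXT, occurred_at TIMESTAMP)
Table users(user_id TEXT, plan TEXT, signed_up_at TIMESTAMP)
Allowed event_name values: 'signup', 'report_created', 'report_shared'
"""

def question_to_sql(question: str) -> str:
    """Ask the model for SQL constrained to the documented schema only."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-completion model works here
        messages=[
            {"role": "system",
             "content": "Write a single PostgreSQL query. Use only the tables, "
                        "columns, and event names documented below.\n" + SCHEMA},
            {"role": "user", "content": question},
        ],
        temperature=0,  # favor deterministic, 'convergent' output
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(question_to_sql("How many users created a report last week?"))
```

Even with schema grounding, the "is this the right answer?" problem Vijay raises still requires review of what the query actually references.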

Matt Dupree 14:19
I think that’s right. Some of what Zach was talking about earlier was also about investing in the supporting infrastructure to make these LLMs really effective, and I do think you’re right that taking that tack could be underrated. The sentiment is kind of all-or-nothing — "if ChatGPT isn’t AGI, I’m not interested" — which is misguided, I think. Awesome. Jonathan, did you want to add anything before we move on?

Jonathan Pedoeem 14:48
No, I think that was a good analysis.

Matt Dupree 14:51
Yeah, awesome. Okay, this conversation is actually flowing really nicely with the questions I wrote — which I didn’t use ChatGPT for, by the way. I’m curious whether you’ve seen any emerging technical or product patterns for dealing with the limitations of LLMs. Vijay, you touched a little on building supporting infrastructure to make sure the right data is available to the LLM, but if there are any other patterns, technical or product, that you’ve seen, I’d love to hear about them.

Jonathan Pedoeem 15:24
Well, there’s one: good prompt engineering. Having feedback loops where you see in production where a problem is occurring and then address it, whether through prompt engineering, fine-tuning, those types of things. So there’s that feedback loop in the wild — making sure hallucinations don’t happen and the system isn’t doing the wrong type of data selection. There are also people trying very, very hard to come up with test sets and metrics that can tell you how well a prompt will do in the wild. Although I’m not sure we’ve found something that works and is as flexible as people need it to be in production, beyond letting it run in the wild and seeing where it makes mistakes.

Matt Dupree 16:17
So when you say looking at the actual behavior in production, you’re almost talking about prompt observability or something like that — being able to see that. That’s interesting. Is that part of what PromptLayer does?

Jonathan Pedoeem 16:29
Yeah, it’s a big part of what we do: the feedback loop from iterating on your prompts to seeing how they respond in production. For example, if you’re a chatbot and you didn’t anticipate that customers would use a specific type of language, and that triggers the LLM to give a response you don’t want — you see that in production. You didn’t anticipate that specific edge case, another edge case occurred, and now you have the data to adjust your prompt and deal with it.
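To make the feedback loop Jonathan describes concrete, here is a generic sketch of logging production prompt/response pairs and surfacing unanticipated responses for review. This is an illustrative assumption, not PromptLayer’s actual API; the log format and review heuristic are made up for the example.

```python
# Generic sketch of a production feedback loop: log every prompt/response pair
# so unanticipated inputs can be reviewed and the prompt adjusted.
# (Illustrative only; this is NOT PromptLayer's API.)
import json, time, uuid
from pathlib import Path

LOG_PATH = Path("llm_requests.jsonl")

def log_llm_call(prompt: str, response: str, tags: dict) -> None:
    """Append one prompt/response record to a local JSONL log."""
    record = {
        "id": str(uuid.uuid4()),
        "ts": time.time(),
        "prompt": prompt,
        "response": response,
        "tags": tags,  # e.g. prompt version, user segment, feature
    }
    with LOG_PATH.open("a") as f:
        f.write(json.dumps(record) + "\n")

def flagged_responses(must_not_contain: list[str]) -> list[dict]:
    """Crude review pass: surface logged responses hitting known bad patterns."""
    hits = []
    for line in LOG_PATH.read_text().splitlines():
        rec = json.loads(line)
        if any(bad.lower() in rec["response"].lower() for bad in must_not_contain):
            hits.append(rec)
    return hits

# Usage after each model call in production (hypothetical tags):
#   log_llm_call(prompt, response, {"prompt_version": "v3", "feature": "chatbot"})
# Then periodically review, e.g.:
#   flagged_responses(["as an ai language model", "i cannot help with"])
```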

Matt Dupree 17:03
I see, that’s interesting — that makes a lot of sense. You also mentioned test cases; I’m hearing that a lot. One thing we haven’t talked about is this idea of prompt drift: you have a prompt that works at one point, and then it stops working. I asked Zach about this in the last session, and he also brought up test cases and enabling folks to see when something stops working. So those definitely seem like patterns, given that both Zach and you brought them up — and I actually spoke with someone at another conference a couple of weeks ago about setting up some sort of test case or test harness for these things. Anything from you two — Vijay or Allison?

Allison Pickens 17:46
On the subject of the emerging technical or product landscape for solving hallucinations and other issues, I do think it’ll be interesting to see what kinds of guardrail products get built over time. I’ve talked to some entrepreneurs who are interested in solving that as a horizontal problem — how do we just make sure there’s a universal guardrail? I think that’s really unlikely; more likely it will be very catered to specific use cases. For example, I encountered a company called Nomos the other day, which is initially focused on creating compliance guardrails for financial services companies in particular. Let’s say you’re a consumer interested in certain banking products, and you’re talking with a chatbot about what products might be useful. That chatbot needs to say certain things about the cost of the potential investment opportunities and what the returns might be, and it needs to comply with certain company standards as well as legal standards. That’s a very specific use case. You could imagine financial services companies plugging into some kind of API to access Nomos, and then maybe putting the Nomos brand on their websites so people know they’re protected by Nomos for this specific thing. But it’s likely not just going to be Nomos — there will be other categories of guardrails that need to be built, and those might again build great brands around their trustworthiness.
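As a toy illustration of the kind of use-case-specific guardrail Allison describes — and emphatically not Nomos’s product or any real compliance logic — a post-generation check on a chatbot’s draft reply might look like this sketch, with entirely hypothetical policy rules:

```python
# Toy guardrail sketch (NOT a real compliance product): check a chatbot's draft
# reply for required disclosures and forbidden promises before showing it to a
# banking customer. The policy lists below are hypothetical placeholders.
import re

REQUIRED_DISCLOSURES = ["not financial advice"]
FORBIDDEN_PATTERNS = [r"guaranteed returns?", r"risk[- ]free"]

def passes_guardrail(draft_reply: str) -> tuple[bool, list[str]]:
    """Return (ok, reasons); block the reply if any check fails."""
    reasons = []
    lower = draft_reply.lower()
    for phrase in REQUIRED_DISCLOSURES:
        if phrase not in lower:
            reasons.append(f"missing disclosure: '{phrase}'")
    for pattern in FORBIDDEN_PATTERNS:
        if re.search(pattern, lower):
            reasons.append(f"forbidden claim matched: /{pattern}/")
    return (not reasons, reasons)

# Usage: ok, reasons = passes_guardrail(draft)
# If not ok, fall back to a compliant canned response and log `reasons`.
```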

Matt Dupree 19:16
Yeah, that’s interesting. I think that fits with Zach’s talk earlier, too — a lot of the guardrails he discussed, some are generalizable, but they feel pretty specific to what they’re doing. So that makes a lot of sense. Vijay, do you want to add anything?

Vijay Umapathy 19:31
Yeah — so we’re a little earlier; we’re at the POC stage. One of the points made earlier applies to how we’re doing things: trying not to test on demo-ware cases, but instead creating distributions and building as large a dataset as you can. For example, when we look at a given use case, we’ll start with maybe a test set of one or two examples, then create test data, then create several variations of prompts, and then test those different prompts. We’re not at the point yet where — I mean, now I should go look up PromptLayer — granted, we’re doing a lot of this in a spreadsheet right now. But over time, as we scale, we’ll probably move to more systematically testing lots of different prompts for a domain, and then, if that domain looks promising, scale out the dataset accordingly and make it more and more representative of the real thing. And separately — this is part of how we’re going about building these — as we get directional signal that a certain domain is probably going to click really well, we use that to prioritize the infrastructure investments accordingly. So in an analytics tool, if you’re looking at querying versus data governance and the data governance use cases actually seem more promising and reliable, we’ll prioritize building more robust APIs for data governance over querying. We haven’t hit a lot of those decision points yet, but that’s how we’re thinking about it and approaching it — being systematic. Because if you’re caught up in the crowd watching this stuff, it’s very easy to toss 50% of your roadmap and say, "oh, this is so cool, let’s go do it," and then you end up sacrificing a lot of impact. So I think it’s very important to test methodically as you go, because the technology is evolving really fast, but it’s also really easy to burn time on something that doesn’t bear fruit when it’s actually very easy to test.
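A rough sketch of the prompt-variant testing Vijay describes moving toward — the "spreadsheet workflow" in code. The prompt templates, test case, keyword-grading heuristic, and model name are hypothetical placeholders, not Heap’s tooling; it assumes the OpenAI Python SDK.

```python
# Minimal sketch: systematically compare prompt variants against a small test
# set and report a pass rate per variant. (Hypothetical, not Heap's tooling.)
from openai import OpenAI  # assumes openai>=1.0 SDK and OPENAI_API_KEY

client = OpenAI()

PROMPT_VARIANTS = {
    "terse":    "Answer in one sentence: {question}",
    "stepwise": "Think step by step, then answer briefly: {question}",
}

# Tiny seed test set; in practice you'd grow this toward the real distribution.
TEST_SET = [
    {"question": "Which event marks activation in our product?",
     "must_mention": "report_created"},  # hypothetical expected keyword
]

def run_variant(template: str, case: dict) -> bool:
    """Return True if the variant's answer mentions the expected keyword."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat model
        messages=[{"role": "user", "content": template.format(**case)}],
        temperature=0,
    )
    answer = resp.choices[0].message.content or ""
    return case["must_mention"].lower() in answer.lower()

def score_all() -> dict:
    """Score each prompt variant as a pass rate over the test set."""
    return {name: sum(run_variant(t, c) for c in TEST_SET) / len(TEST_SET)
            for name, t in PROMPT_VARIANTS.items()}

if __name__ == "__main__":
    print(score_all())
```

Keyword matching is a crude grader; the point is only to show prompts being varied while the test set stays fixed, so results are comparable as the dataset grows.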

Matt Dupree 21:44
Sure — just to be clear, you’re prioritizing the infrastructure investments so that you can provide the proper context to the LLM?

Vijay Umapathy 21:51
Exactly. We’re doing that manually in the near term for proof of concepts, right.

Matt Dupree 21:58
Nail it and then scale it, right? That kind of thing. Yeah.

Vijay Umapathy 22:01
And the other lever we look at is, for the same task, how do cheaper models versus more expensive models perform? Although what’s interesting is that those gaps are starting to close — a lot of the more performant, more capable models are getting exponentially cheaper.

Matt Dupree 22:20
And also — I talked about this in my talk — there are some cases where GPT-3.5 does better than GPT-4. It’s not even true that GPT-4 is always better; sometimes 3.5 gets the right answer and it’s cheaper, is what I mean.
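The same kind of harness from the earlier sketch can be pointed at the model axis instead of the prompt axis — a quick hedged example, again with a placeholder test case rather than anyone’s real evaluation set:

```python
# Sketch: compare a cheaper and a pricier model on the same task set.
# (Hypothetical harness; the test case is a trivially checkable placeholder.)
from openai import OpenAI

client = OpenAI()
MODELS = ["gpt-3.5-turbo", "gpt-4"]       # the two tiers discussed in the panel
CASES = [("What is 17 * 24?", "408")]      # (question, expected substring)

def pass_rate(model: str) -> float:
    """Fraction of cases where the model's answer contains the expected string."""
    hits = 0
    for question, expected in CASES:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": question}],
            temperature=0,
        )
        hits += expected in (resp.choices[0].message.content or "")
    return hits / len(CASES)

for m in MODELS:
    print(m, pass_rate(m))
```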

Vijay Umapathy 22:35
By the way, I do see one common pattern in the products I’ve seen reach production. I think most of what’s out there is hitting OpenAI’s APIs — it’s the fastest way to get into production, the fastest way to ship something reasonable. And there are a couple of categories. One is just going for it with a full chat interface — if chat is the relevant interface and domain for what you’re doing, maybe support-automation-type things, then they’re just doing it very directly. The other type is basically using — I totally forgot what the other one was. I’ll think about it a little more.

Matt Dupree 23:27
No worries, no worries. It’s tough — this stuff’s complicated. I think we can move on to the next question, and hopefully Vijay will think of it before we get too deep into it.

Vijay Umapathy 23:36
I’ve got several minutes to try to remember.

Matt Dupree 23:41
Yeah, that’s right. So, you hinted at this just now: an obvious mistake to make with LLMs and product strategy is to just throw away your roadmap, build the shiny thing, and not really be thoughtful about it. That’s kind of an obvious mistake, but I’m curious what other mistakes people are seeing as folks try to incorporate LLMs into their products. Allison, maybe you could give more of the perspective on mistakes you’re seeing AI-native startups make.

Allison Pickens 24:15
Yeah. Related to what Vijay was saying, I think there were a number of companies that got started in 2022 — you could call them pre-ChatGPT-wave — that were still young companies in pursuit of product-market fit. Then the GPT thing became evident to everyone, and they felt the need — their board was saying, you have to look into this, you have to figure out how to use GPT. So they diverted their focus away from the search for product-market fit toward just making sure they had GPT embedded in the product in some way, like maybe a chat interface. I think that’s a distraction. Now, if there’s a real way you can take out costs for your customers, or draw attention to your app in a big way — and I actually think there are a lot of products that were built before the GPT era but suddenly got traction because they use GPT; Gamma (gamma.app) is a great example: they were founded, I think, in 2021, and then got something like 10,000 new users per day in the several weeks after their launch a couple of months ago, because they relaunched basically around generative AI — then that’s an opportunity. Taking out meaningful costs or adding a lot of value for your customers through GPT is a meaningful opportunity. But if it’s not on your path to product-market fit, it’s a distraction, in my opinion.

Vijay Umapathy 25:43
Sure. One thing I’ll build on there — and by the way, I remembered the thing I was trying to say earlier.

Matt Dupree 25:49
Hopefully you didn’t use ChatGPT to help remember.

Vijay Umapathy 25:51
Yeah, exactly. The other pattern I was going to mention — which is not super surprising — is that a lot of people are building opt-in rather than opt-out default interfaces. So instead of automatically summarizing everything on a page for you, they make you ask for a summary. That not only gives them a simple feedback loop on whether this is delivering value — which I think is actually a very good thing — it’s also controlling costs. So that’s just another pattern I’ve observed across different tools.
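A small sketch of that opt-in pattern, with hypothetical endpoint and helper names rather than any specific product’s code: the summary is only generated when the user asks, and each request is logged so repeat usage can serve as the value signal (and spend stays tied to demand).

```python
# Sketch of the opt-in pattern: summarize only when the user asks, and log
# each request so repeat usage becomes the "is this valuable?" signal.
# (Hypothetical names; not any specific product's code.)
import json, time
from openai import OpenAI

client = OpenAI()
USAGE_LOG = "summary_requests.jsonl"

def summarize_on_request(user_id: str, page_text: str) -> str:
    """Called only when the user clicks 'Summarize' -- never automatically."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat model
        messages=[{"role": "user",
                   "content": "Summarize this page in 3 bullet points:\n" + page_text}],
        temperature=0,
    )
    summary = resp.choices[0].message.content or ""
    # Log the opt-in; repeat requests per user are the crude value metric.
    with open(USAGE_LOG, "a") as f:
        f.write(json.dumps({"user": user_id, "ts": time.time()}) + "\n")
    return summary
```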

Matt Dupree 26:24
Got it. And I can see how that can also be used to understand the limitations of the LLM — if it’s not providing value, people aren’t going to keep opting in, so you can tell. We’re thinking about something like that for ours too: we’re guiding people to the right place in an application, and if they’re not repeat users of that, it’s a pretty clear indication we got it wrong. So yeah, that makes sense.

Vijay Umapathy 26:53
Going back to your other question about what mistakes people are making — Matt, one thing we were talking about earlier today is that there are really two categories, with two different types of mistakes. One is the AI platform companies that are potentially building what’s just a feature of a larger platform company. You have to ask yourself: what are you doing — what is your defensibility in that situation? The other category is people building end-user applications who are basically saying, okay, we’re going to come in and disrupt X type of marketing content tool or whatever, and we’re going to take an LLM-based approach. The advantage these companies have is that the incumbents were created in a world where they assumed there was no LLM for content creation, so they staffed up accordingly and built a very high-marginal-cost approach. You have the opportunity to disrupt on cost, if you can create something sufficiently high quality with a lot of automation. But that also means your go-to-market can’t be sales-led. If your ultimate advantage is going to be much better margins, you’d better go to market in a way that’s much lower cost. Some of these companies saying "we’re going to do an AI-based approach to X, Y, or Z" are competing with — literally, if you’re making a marketing tool, you’re competing with Canva, and Canva can wake up one day and decide to build an LLM-based approach, with a massive distribution advantage. So how are you going to counteract that? Is your whole company, including the go-to-market motion, actually taking that advantage into account? I think that’s going to be really interesting to watch with a lot of these companies cropping up, because they’re under a lot of pressure, with all the venture funding they’re raising, to grow very quickly. And it’s really easy to say, okay, the best way to grow quickly is to hire a sales team — which fundamentally changes your unit economics, so you lose the advantage you would have had otherwise. I’m a big proponent that for these LLM companies, if their advantage is going to be cost, they’ve got to also sell in a way that’s cost-efficient.

Matt Dupree 29:30
Love it. We only have one minute left — time flew by. I love that comment, Vijay; I’ve been thinking about this kind of thing a lot as we wrap up. So let’s ask one more thing. Maybe we could each share resources we use for keeping up with what’s going on with LLMs and product strategy — are there things people look at that are really useful? Maybe we’ll end on that note.

Allison Pickens 30:00
So, my quick tip would be: there are so many newsletters nowadays talking about LLMs that you can just pick your favorite. What’s potentially more useful is trying to use as many products as you can yourself — try them out, experiment, get the firsthand experience — and you’ll develop the skills we’re all going to need in this new economy.

Matt Dupree 30:21
I love it — "taste the soup." Is that still a value at Heap? Yeah, taste the soup. Cool. Any other thoughts on resources for staying up to date on this?

Jonathan Pedoeem 30:35
I think Twitter is pretty strong — a lot of noise there, a lot of noise. Twitter, or X, or Threads, or Mastodon, whatever the other ones are called. There are a lot of people putting interesting stuff there, and I use it as a way to let things bubble up. Every other day there’s a new library, a new this, a new that, a new technique — and if it has staying power on Twitter, then you say, okay, time to block off some time on my calendar and dig deep.

Matt Dupree 31:13
Nice.

Vijay Umapathy 31:15
I like paper summaries on YouTube — AI Explained is a good one, and there’s a whole bunch of them. I think the good ones aren’t opinion thought pieces; they’re just "a paper was published, here’s a summary, here’s what it showed on a thousand examples, here’s what the dataset said." If you’ve got a few minutes, they get you through a lot of content.

Matt Dupree 31:39
Got it. Yeah, I’ll quickly share — oh, go ahead, Allison; it looked like you unmuted. Okay. I’ll quickly share mine: I’ve actually found — we didn’t get to talk about this — that the fundamentals of product strategy haven’t really changed even though LLMs exist in the world. So I’ve found it really useful to just revisit some of that material. 7 Powers is a really interesting book on product and business strategy, written by a former Stanford economist who now does VC work, and I’m finding it extremely useful as I think about the shifting landscape with LLMs — the fundamentals are the same, even if some things are changing on top, and knowing the principles has been really useful. So I’ll share that as my resource. Cool, I think we’re out of time. Thank you all so much — this was a delight. I think Erin’s going to come on here and — yeah, there she is. All right.

Erin Mikail Staples 32:37
First off, this was a great panel. I learned a lot, and I can plus-one the recommendation of paper reviews on YouTube — that’s actually my background noise a lot of days. You’ve got to get through the papers; it feels like the only way to stay on top of things.
