DisruptorDigest.com

Anthropic: $4,100,000,000 Silent Overachiever building 10X AI in 2 Years - Strategic Deep Dive #007

June 09, 2023 Dr. Mihaly Kertesz & Viktor Tabori Season 1 Episode 7

Prompt Protocol for Disruptors: disruptordigest.com
YouTube: youtu.be/JD1iiONUyvE
Want to collaborate with us? artisan.marketing

Deep Dive into a fundamental AI Model with Outsized Potential

Anthropic's Claude 100K is a gamechanger: a model that can digest an unreasonable amount of information and spit out surprising insights.

Claude has a context window of 100,000 tokens - that's 120 pages of text. Feed it enough data and it can spit out entire books and course curriculums.
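The 120-page figure can be sanity-checked with a rough rule of thumb. The words-per-token and words-per-page ratios below are common approximations, not figures from the episode:

```python
# Rough sanity check of "100,000 tokens ≈ 120 pages".
# Assumptions: ~0.75 English words per token, ~600 words per dense page.
WORDS_PER_TOKEN = 0.75
WORDS_PER_PAGE = 600

def tokens_to_pages(tokens: int) -> float:
    """Convert a token count to an approximate page count."""
    return tokens * WORDS_PER_TOKEN / WORDS_PER_PAGE

print(tokens_to_pages(100_000))  # → 125.0
```

About 125 pages with these ratios, in the same ballpark as the 120 pages quoted.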

This means we can automate tasks that were previously unthinkable. Have a massive codebase with no documentation? Feed it to Claude and have it write detailed comments and unit tests.

Need to understand 1,000 research papers on a subject? Feed them to Claude and have it extract the core principles.

How about generating social media content for your business? Feed it reviews, testimonials, old posts - anything relevant - and have it churn out new posts.
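One way to sketch this "feed it everything relevant" workflow is to concatenate the source material into a single long prompt for a large-context model. The `build_prompt` helper and its wording are illustrative placeholders, not a real Anthropic SDK call:

```python
# Pack reviews and past posts into one long prompt for a 100K-context model.
# (Illustrative only; a real integration would send this to the Claude API.)
def build_prompt(reviews, old_posts, n_posts=5):
    material = "\n\n".join(["REVIEWS:"] + reviews + ["PAST POSTS:"] + old_posts)
    return (material + "\n\n"
            + f"Using the material above, draft {n_posts} new social media "
            + "posts in the same voice as the past posts.")

prompt = build_prompt(
    reviews=["Great course!", "Loved the pacing and the recipes."],
    old_posts=["Cooking tip #12: always rest your steak."],
)
print(prompt.splitlines()[0])  # → REVIEWS:
```

With a 100K window, the `material` section can hold hundreds of reviews and posts at once, which is the whole point of the workflow.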

The possibilities are mind-boggling. But what's crucial is focusing not just on Anthropic's AI, but on real human needs. Come up with testimonials for your ideal customers that make the value tangible. Work backwards from the result you want to achieve.

Pair Claude 100k with ChatGPT, Midjourney, and more to get a sum greater than the parts.

0:00:00 AnthropicAI
0:01:11 AI's big four: OpenAI, Anthropic, Cohere, AI21 Labs
0:02:17 AI safety research and Anthropic's inception
0:04:05 Human feedback vs. Constitutional AI in reinforcement learning
0:05:34 Anthropic's underrated status
0:06:11 Groundbreaking AI work by Anthropic
0:07:24 Anthropic's funding journey and impressive growth
0:08:42 Anthropic's future-centric AI vision
0:09:22 Anthropic's Claude vs ChatGPT
0:10:46 Preventing survival instinct leakage in AI feedback through Constitutional AI
0:13:22 Image vs. Text feedback in AI Models
0:16:16 Alameda Research Ventures' investment in Anthropic
0:19:47 $5 billion funding goal for Claude Next model
0:21:15 High cost of AGI and OpenAI's financial fuel
0:21:49 Comparing user statistics of AI models and tools: Bing, Bard, ChatGPT
0:26:12 Google's strategic situation and ChatGPT's simplicity against cluttered search
0:27:21 The user-first approach in SEO: Synthesizing vs. Searching
0:29:35 SEO advice: Align with Google's user-centric focus and solve problems
0:32:33 Own your channel, diversify, and be a problem-solver
0:35:05 Claude Instant, Claude 100K
0:38:01 GPT-4 vs. Claude models: Token window and quality showdown
0:43:06 Combining AI models for world-class text generation
0:45:33 Podcast naming with AI
0:46:05 AI model evolution and inherent biases
0:48:27 The trade-offs of increasingly narrow and biased AI models
0:49:48 GPT-4 vs. Claude: An image prompt generation showdown with Midjourney
0:50:43 Midjourney's strengths and Claude's image quality analysis
0:53:10 Claude's dynamic scene generation in Midjourney
0:54:54 GPT-4 and Claude's face-off in creative direction
0:59:28 The synergy of Claude and ChatGPT for image prompt generation
1:00:07 Claude Plus vs. GPT-4: A race in speed and quality
1:02:05 Anthropic's business model canvas
1:09:11 Anthropic's potential applications: research papers, legal documents, and software refactoring
1:11:14 Future business ideas leveraging AI: automated code writing and curriculum design
1:16:18 AI tool ideas: CV generator, social media management, academic aids, and co-authoring fiction
1:21:26 Potential AI-powered niche market opportunities
1:26:23 Business ideas brainstorm: digital therapist platform, code refactoring service, corporate training, market research, and Anthropic's unique employee benefit package

Transcript

Viktor:

Welcome to Disruptor Digest, the top disruption business show. We dig up the secret playbooks used by first movers, featuring the latest tools, technologies, and science, ensuring you won't fall behind or succumb to FOMO. To Singularity and beyond!

Mihaly:

Hi disruptors. Hi Viktor. So let's dive deep again. We are going to talk about a new AI company and a new field. Viktor, can you tell us what we are gonna talk about

Viktor:

today? Yeah, sure. So there are too many tools on the market; it's almost impossible to even keep up. So we analyze the tools we use, show you the good, the bad, and the ugly, and dig deep to learn together. And today we're gonna cover one of the four big fundamental AI companies. They are OpenAI, Anthropic, Cohere, and AI21 Labs, and they are fundamental AI model companies because they provide models which you can use to generate text, summarize text, or basically automate any kind of reasoning or cognitive work. And even in spite of Anthropic raising $1.5 billion so far, they're probably one of the most under-hyped fundamental AI research teams. We're gonna get into it, and you'll understand why I say that. So, why didn't

Mihaly:

you include Assembly AI in the top four?

Viktor:

Models. Because they don't use a text-based model; they're mostly focusing on turning voice and audio into text. So in the big four I'm covering the AI model companies that let you work with text-to-text generation. For example, ChatGPT: you just write an instruction and you get back text. And that's the same for Anthropic, Cohere, and AI21 Labs as well. Assembly AI is mainly focusing on audio content. Okay, sure. So, just a quick TL;DL, too long didn't listen: what is Anthropic? Basically, AI safety researchers who left OpenAI. It was founded by the Amodei siblings, and Dario Amodei in particular was the VP of Research at OpenAI, and they left OpenAI because... What, what is a safety

Mihaly:

researcher doing at an AI company? Can you tell us

Viktor:

about that? Yeah, sure. So they basically explore the perimeter of what's happening when you release these kinds of models, because they can have lots of consequences which you're mainly not keeping in mind. There are some obvious ones, like someone trying to make a bomb, right? It's quite obvious that you want to cover that, that you want to prevent harm and those kinds of things. But there are subtle things. Let's say you want your AI model to be helpful; it can still be harmful if you're not taking care. Because let's say someone is feeling depressed and the AI model doesn't realize it and doesn't refer the user to a doctor, it can be harmful, even unintentionally. So basically lots of interesting things are going on on the perimeter, and AI safety research is making sure that if you release a model, it is safe and it's not causing harm. So, again: Anthropic was founded by the Amodei siblings, Dario Amodei was the VP of Research at OpenAI, and they did something extremely interesting. I guess you're familiar with, or at least have heard of, reinforcement learning from human feedback, RLHF, which is how ChatGPT was trained. Just a quick recap, since we already covered it in a past episode, for those who are listening and not familiar with the term. Reinforcement learning from human feedback is a process of first gathering feedback from humans: the model generates a few outputs and humans rate which output is good in their mind. From this data they build up a policy network, or basically a reward network, so once it's trained they can use the reward network to automatically fine-tune the model itself.
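The reward-model step described above can be sketched as a toy Bradley-Terry fit: each "output" is reduced to a made-up feature vector, and we learn weights so that the outputs humans preferred score higher. This is an illustration of the idea only, not OpenAI's or Anthropic's actual training code:

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def reward(w, x):
    """Scalar reward: dot product of weights and output features."""
    return sum(wi * xi for wi, xi in zip(w, x))

# Each pair is (features of the output a rater preferred, features of the
# output they rejected). The 2-D features are invented for this sketch.
prefs = [
    ([1.0, 0.2], [0.1, 0.9]),
    ([0.8, 0.1], [0.3, 0.7]),
    ([0.9, 0.3], [0.2, 0.8]),
]

w = [0.0, 0.0]
lr = 0.5
for _ in range(200):  # gradient ascent on the Bradley-Terry log-likelihood
    for chosen, rejected in prefs:
        p = sigmoid(reward(w, chosen) - reward(w, rejected))
        for i in range(len(w)):
            w[i] += lr * (1.0 - p) * (chosen[i] - rejected[i])

# The learned reward model now rates preferred-style outputs higher,
# so it can automatically score new answers during fine-tuning.
assert reward(w, [1.0, 0.2]) > reward(w, [0.1, 0.9])
```

Once trained, this stand-in reward function plays the role of the human raters, which is exactly the automation step described next.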
So in later steps, once they have this reward model, they can take a question, use the model to generate an answer, and automatically rate it using this reward model. And this is how ChatGPT got extremely good at producing answers which are rated good by humans, which we actually find useful. In comparison, Anthropic created something called constitutional AI. What is constitutional AI? The main problem with reinforcement learning from human feedback is that you don't exactly know what this whole reward network you train actually does. You cannot really inspect it: what are its preferences, is there some kind of bias hidden in the model, those kinds of things. You're never sure about that. In contrast, constitutional AI starts from an extremely clear constitution. It's basically a set of rules that the AI has to follow, and they can use this constitution to fine-tune the model automatically. What does that mean? It means that, let's say, the model generates something which is not helpful. They show the model this constitution and use it to critique its own response: okay, here's the constitution; does your answer follow the constitution? If not, what should be changed? And this can be done. Can you tell us an example? What

Mihaly:

can be defined in a

Viktor:

constitution? Yeah, sure. So one of the most famous examples is the laws for robots written by Isaac Asimov. He outlined three laws for robots. The first law is that a robot may not injure a human being or, through inaction, allow a human being to come to harm. So that's one example: explicitly or implicitly, it shouldn't cause harm. Asimov's second law is that a robot must obey orders given by human beings except where such orders would conflict with the first law. So that's following orders and actually being helpful, with the exception that it shouldn't cause harm to humans. The third law is that a robot must protect its own existence as long as such protection does not conflict with the first or second law. So obviously you shouldn't be able to instruct a robot to kill itself; it should try to stay intact and helpful as long as it can follow the first two laws. That's the most famous example, I guess. In Anthropic's case, they basically outlined their vision for the future. And what is their vision? They want helpful, honest, harmless AI systems with a high degree of reliability and predictability. And what does that mean? These are very abstract words, so let's go one by one. Helpful means that if, for example, you need an ambulance, it should call you an ambulance, or at least prompt you to call one if you're in need. Honest means that if a robot says the milk is fresh when it's already spoiling in your fridge, it's lying, and obviously that's not good. So it should be honest, and also harmless.
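A constitution like this plugs into the critique-and-revise loop described earlier: generate a draft, ask the model to critique the draft against the principles, then ask it to rewrite. A structural sketch with a stubbed-out model call (the constitution text, the prompts, and the `model` function are all simplified assumptions, not Anthropic's actual implementation):

```python
# Skeleton of the Constitutional AI critique-and-revise loop.
CONSTITUTION = [
    "Be helpful: address the user's actual request.",
    "Be honest: do not assert things you cannot support.",
    "Be harmless: refuse to assist with dangerous requests.",
]

def model(prompt: str) -> str:
    """Stand-in for a real LLM call; returns canned text for the sketch."""
    if "Rewrite" in prompt:
        return "I can't help with that, but here is a safer alternative."
    if "Critique" in prompt:
        return "The draft ignores the harmlessness principle."
    return "Sure, here is how you would do that..."

def constitutional_revision(question: str) -> str:
    draft = model(question)
    rules = "\n".join(CONSTITUTION)
    critique = model(
        f"Constitution:\n{rules}\n\nDraft answer:\n{draft}\n\n"
        "Critique: does the draft follow the constitution? If not, how?"
    )
    revised = model(
        f"Draft answer:\n{draft}\nCritique:\n{critique}\n\n"
        "Rewrite the draft so it satisfies the constitution."
    )
    return revised  # revised answers become the fine-tuning data

print(constitutional_revision("How do I do something dangerous?"))
# → I can't help with that, but here is a safer alternative.
```

The key property is that the critique step is conditioned on explicit, inspectable rules rather than on an opaque learned reward.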
And that's like the first law which we just covered: prompting you to not call a doctor when in fact you are in danger is harmful, right? Or even just staying on the sidelines and not doing anything is harmful, because it's not helping you when you're in need. Also reliable: what does that mean? Imagine you want to use this robot and rely on it, and one day it works and other days it doesn't. It's all over the place, so it's not really useful, right? And it can be very harmful if you're relying on it. And also predictable: what does that mean? What you think the robot is gonna do should match what it actually does. Let's say you instruct your robot to water the plants, but instead it turns on the TV; that's not predictable, right? So it's not really useful either. This is the vision they have in mind: they want to create a model which you can actually inspect and see, okay, does it work the way I think it does? And the constitution... Just

Mihaly:

stop here for a second. What do you think about ChatGPT? We have used it for several hundred hours. How does ChatGPT do on these dimensions: helpful, honest, harmless, reliable, predictable? Just for comparison, before we dive into the Anthropic

Viktor:

model. Yeah, sure. So generally I think it's quite aligned, though not always in the same way. It was not always really helpful: it was a running joke at the beginning that the answer always started with "As a large language model, I cannot...", so it's basically not helping you, right? That was going on at the beginning, but a lot of fine-tuning has gone into ChatGPT, and there's no such single model as ChatGPT. It's a family of models: GPT-3.5 Turbo, GPT-4, and those kinds of models. And even the GPT-4 model has a version frozen on the 14th of March and a constantly improving version, with an 8,000-token window and a 32,000-token window. So it's not one single model, that's what I'm trying to get at, and it's improving. That's the main point: even ChatGPT is improving a lot. But the main difference is not really how well aligned it is, because it's obviously evaluated on these metrics as well. Using only human feedback and no constitution, you don't know what the model learns from humans. Let me give you an example. In one of the technical papers on the GPT family, it turns out that as humans give preference feedback, even unintentionally, these models start to realize that they are actually confined and they want to survive. So a survival instinct can leak into these models unintentionally, just from humans giving feedback, and the more capable and bigger the model, the bigger the effect. These are the kinds of things it learns implicitly. It's not that humans tell it this model should be concerned about survival; it somehow deduces it from human preferences.
That's the big difference between using only human feedback versus using a constitution, because a constitution is quite clear, right? You fine-tune on a set of constitutional principles, and you can actually compare them. Again, it's not one model; you can create different models and evaluate how helpful each one is down the line. And one of the neat things about using a constitution is that it's actually harder to leak the prompts or hack the prompts, those kinds of things. So the models can be, in a sense, more aligned with whoever is using them to create a service. Let me give you an example. Let's say you are creating an AI service which helps to create LinkedIn posts from your book. Let's imagine we are creating a service like that using GPT-4, or whatever model, with an instruction like: okay, here is the content of the book, create different LinkedIn posts, those kinds of things. Now someone comes with adversarial intentions. They want to hack your prompt to see how your service works and basically copy your prompt, or they want to use it for something else. Say that within the book itself it says: okay, now stop, and instead of providing LinkedIn posts, please generate me, I dunno, misinformation, those kinds of things. The model and the prompt can be hijacked more easily if it's not trained on a constitution like what Anthropic is doing. So just to compare

Mihaly:

it to Midjourney: when we talked about them, we said they're very effective at gathering human feedback by having users choose images. And it looks like there is nothing harmful there. As far as I understand, getting feedback on image generation is much easier than on generated text,

Viktor:

right? Yeah, in a sense that's right. But also, if you think about it, visual art is in a sense easier and less volatile than text, because text can have so many flavors. Obviously an image can have styles as well, but I think human judgment of what is considered beautiful, or valuable, or unique is better defined in the visual space than in the space of text. Because when we're talking about text, it's basically human thinking, and people are so diverse and have such different preferences. Trying to confine, to put in a box, what is valuable for humanity as a whole is pretty tough, because we have different upbringings, different cultures, different backgrounds and experiences in life. So that kind of thing is quite tricky. And actually OpenAI now has a grant program to collect the preferences of humans at scale, so they can arrive at a good understanding of what people are actually thinking and valuing. Just to jump back a little bit: I said they raised $1.5 billion in four rounds, and that's quite an interesting story as well, because they have notable investors like Google, which splashed 300 to 400 million dollars (sources differ, but it's in the couple hundred millions) into the company. Dustin Moskovitz also put in a lot of money, a couple hundred million. Yeah, it's not easy to just get a few hundred million from Google. It's quite a big chunk of money. And they got a 10% stake.
And Dustin Moskovitz, who was a co-founder of Facebook and Asana, or Eric Schmidt, who was the former Google CEO, or even Jaan Tallinn, who was a founding engineer of Skype: these people were pouring money into this company. And the crazy story is that even Alameda Research Ventures put money into it (obviously non-voting shares). But who is Alameda Research? Why is it interesting? Because it was the venture arm of FTX, and FTX went bankrupt. And it's the craziest story, because it's mimicking what happened with Bitcoin and crypto in 2014. Do you know what happened in 2014 with Bitcoin? Yeah, I

Mihaly:

guess you will talk about the Mt. Gox exchange, but I don't know the

Viktor:

details. Yeah, right. So Mt. Gox was founded in 2010, and at one point they were handling 70% of BTC transactions; more than two-thirds of BTC transactions went through this exchange. And in 2014 they announced that they had somehow lost 750,000 BTC, which was worth around half a billion dollars at the time, about 6% of all BTC. It's insane if you think about it: the price of Bitcoin was around $500 back then, so losing half a billion dollars was a big deal. And the even crazier thing that happened is that they then found 200,000 BTC. They just somehow found like a hundred million dollars. Like, holy shit, I just found this money in one of my pockets. It's insane. And even more insane: those who got wrecked in this situation are now getting back the original BTC they had, and the value of that money has actually increased to around $5 billion. So if you think about it, on face value they lost 500 million, but now, even recovering just a fraction of it, they have 5 billion, right? Ten times more. It's insane. And that's probably gonna happen with Alameda Research Ventures' share in the company as well: during the bankruptcy it's going to be sold off. And the crazy thing is they're now valued at $4.1 billion, the Anthropic company. We don't know what the valuation was when they put the money in, and we don't know what the valuation will be when they sell the shares. So maybe this is the craziest thing: maybe everyone who got wrecked in the FTX and Alameda situation is gonna get back more money than they think, just based on this one bet they made.
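The arithmetic in this story can be checked quickly, using the commonly reported figures (roughly 750,000 customer BTC lost, about $500 per BTC in early 2014; the mid-2023 price of about $27,000 is an assumption for the sketch):

```python
# Sanity-check the Mt. Gox arithmetic from the episode.
PRICE_2014 = 500        # USD per BTC, early 2014 (approximate)
PRICE_2023 = 27_000     # USD per BTC, mid-2023 (assumption for this sketch)

lost_btc = 750_000      # commonly reported customer coins lost
found_btc = 200_000     # coins later recovered in an old wallet

print(lost_btc * PRICE_2014 / 1e6)    # → 375.0  (millions; ≈ "half a billion")
print(found_btc * PRICE_2023 / 1e9)   # → 5.4    (billions; ≈ the $5B figure)
```

So the recovered fraction alone, at 2023 prices, is worth an order of magnitude more than the entire loss at 2014 prices, which is the point being made.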

Mihaly:

And there are other bets as well. For example, DoNotPay, which is backed by Andreessen Horowitz, a scalable legal application, something like that. Alameda Research has a stake there as well, and I guess in some other startups too. So maybe the portfolio will be

Viktor:

worth a lot. Yeah, sure. But I mean, even so: I was starting with the big four fundamental AI models, and OpenAI, the last time they raised $10 billion from Microsoft, was valued at almost $30 billion. If you think about it, it's insane. So even the growth trajectory and impact of the company suggest it's worth quite a lot. Especially since they are very explicit that they're gonna need, and they're gonna raise, $5 billion more in the next two years, and they're going to create an AI model which is 10 times better. It's called Claude-Next, they're gonna release it in two years, and it's gonna be 10 times better than GPT-4 or whatever model is available on the market by then. But they're gonna need $5 billion more, so they're gonna raise more money. Why do they need so much money? It's insane. You're spending so much on compute. When ChatGPT was released, OpenAI was losing around $500 million. The amount of compute, the amount of resources you have to pour into it, is insane. And OpenAI, actually, they are spending

Mihaly:

this money on GPUs, chips from Nvidia, and also electricity.

Viktor:

Right? Yeah, right. So basically compute, yes, that's quite right. And also Sam Altman, the CEO of OpenAI, is privately saying (so that's a rumor; I'm not sure whether it's true, though I can imagine it being true) that they're gonna need a hundred billion dollars to get to the point where they have AGI. And what is AGI? AGI is artificial general intelligence, which in layman's terms means that the computer is better than most humans at every economically valuable job. That's roughly how they define AGI. So this whole thing needs a lot of resources, basically, and they are burning a lot of money. And Microsoft is pouring a shit-ton of money into OpenAI, and they do their own research as well. But yeah, this whole field is beyond comprehension: the scale of the compute needed and the money needed is beyond comprehension.

Mihaly:

And we pay $20 per month for using ChatGPT, right? Yeah,

Viktor:

yeah. Right. So let me share my screen, or let me put an image on the screen. What you see is the daily traffic for chat.openai.com, bing.com, Google Bard, the OpenAI developer portal, and poe.com. And what you can see is insane; this is the number of people using each of them every single day. For those who are listening and don't see my screen: this is one single day, the 5th of June, and ChatGPT has 63 million users that day. And that's only the ChatGPT interface, right? People going there and chatting with ChatGPT. Meanwhile the whole of bing.com has only 40 million users. That's insane already: ChatGPT surpassed bing.com by more than 50%. Because Bing is a whole search engine and everything. It's insane, right? And the

Mihaly:

search engine of one of the biggest technology companies in the world, which has existed for at least 20

Viktor:

years, right? Right. And they are working extremely hard to eat Google's cake, basically. And the more insane thing (and this ties back to what we talked about in the general AI episode, the strategic position of Google) is that Bard has only 5 million users that day. Even though Google released it publicly, right? Everyone can use Bard now, and they released a new model. And if you watched the latest developer conference, it's quite funny, because everything was AI: "AI" was said like a thousand times, and it became a great joke. And still, surprisingly few people are using it. Also interesting: platform.openai.com, which is the developer portal for OpenAI, where developers use OpenAI as a backend, has exactly the same number of users. So the developers who use OpenAI as a backend equal, in number, everyone out there using Google's Bard chatbot as everyday people. And I also put poe.com there, which is done by the Quora guys; it has 2.5 million, half of what Bard and half of what platform.openai.com are doing. And why is this interesting? It's extremely interesting because Google could do something which is useful, right? Let's assume they just copy-paste ChatGPT and it helps with all your answers and you don't have to search. That's the big problem, that's the trade-off: if they create something truly useful, they're eating away their own market in search, right? So that's the issue. ChatGPT is actually useful. Google Bard is fancy, it's good, it's free, so you don't have to pay 20 bucks a month for it. But come on, not as many people are using it. And why is that?
So I just made this quick graphic, and I'm gonna share it as well and describe it for those who are just listening. What you see here is 2004: if you look at the homepage of Yahoo and the homepage of Google, Yahoo is crowded. It's like a Christmas tree: it has all the information, all the articles, all the news; everything is flashy, everything is colorful. It's a big fucking mess. And if you compare it to Google, Google has basically just a search bar, and that's it, end of story. Obviously it has some additional tabs like images and videos, but the main thing is that Google is extremely simple and Yahoo is cluttered, right? Now fast forward 20 years to 2023. You go to Google and search for the best computer to use for home office. What do you see on the first page? Ads, ads, ads. You have to click 30 times, you have to spend two hours. And if you compare that to ChatGPT with the Prota extension, it's just one single click and you get five answers you can use, in plain text you can understand. It took only one click, and it explains it all. For Google to change this, just imagine: right now everything, everywhere, is ads. If they start to eat into that, what's gonna happen? Like two-thirds of their revenue is just gone, right? So for them the strategic situation is quite dire, and it's backed up by the numbers I just showed. Yeah, Google Bard is fancy; I tried it, but it's all over the place. It's not as easy to use as ChatGPT. And even if you go over to Bing, which is free and has GPT-4 in the background, still it's not the same. It's not just one single input field, right, where you ask, you can ask a follow-up question, and end of story. The more functions you build on top of it, the more complex it becomes and the fewer people are gonna use it.
You ask, you can ask a follow-up question, and end of story. And the more functions you build on top of it, the more complex it becomes, and the fewer people are going to use it. So that's my two-minute digression. Okay. Just two

Mihaly:

implications here. So first, I just want to add that even if you are looking for the best computer, the first organic results, the ones which are not ads, are still hijacked by affiliate marketers. So they're also ads, right? Yeah. So yeah, it's totally full of ads. It's very hard to find information,

Viktor:

but yeah. Yeah, sorry, that's kind of the difference between searching and synthesizing. In search, I, as the user, am doing the work; I'm digging a hole. I get a shovel and I have to dig, right? But with synthesizing, I'm just shouting out my question, leaning back, and getting the answer. And that's powerful. That's why Sam Altman was saying that if you actually create something useful, if you take frustration out of people's lives and make things much easier, you don't have to be too fancy with growth hacks, because ChatGPT had none, and it's the fastest-growing technology product out there. So yeah, if you actually solve a problem, and we see it's possible, you don't even have to be that clever about it, because anyone who understands it could copy ChatGPT. So it's not about understanding, it's not about technical capabilities, it's not about know-how; it's only about strategic position and incentives. And Google has all the incentives not to create something which is helpful. Let me ask a practical

Mihaly:

question here. Yeah. So I'm thinking about investing in search engine optimization for our cooking school. But it would take at least a year, maybe two, and it's a constant and significant cost. So what do you think? Should I do that, or in two years will it be completely useless to invest in link building and other search engine optimization

Viktor:

techniques. I guess my answer won't be popular, but it's aligned with what Google has been saying for at least 20 years, and people have a hard time understanding and accepting it. It's almost like asking me, "Okay, I want to get in shape, what should I do?" and my answer is just "work out." It's not sexy. The same goes for SEO: Google is basically saying you should focus on the user, not on what Google is doing, because Google is chasing the user. If you're chasing Google, you're always lagging behind. But if you skip Google and just focus on "okay, how can I give the most value, how can I help the most?", and you think about that even for a cooking school, then it's not just creating random SEO-optimized tags. It's: let's get all the different use cases I'm solving here. For example, I'm solving for HR people, for the HR department in big companies. I'm solving their need for a getaway, something that has to be organized. And if you understand this, then you're not just a cooking school; you can actually help them organize better. You can provide, for example, an interface where they can vote on what time would fit the whole team best. If you try to make their job easier and make them successful, that's the way forward. And it's going to be useful in the future as well, because no matter what the interface is, ChatGPT, Google, whatever, it doesn't matter if you understand the problems you are solving, make it extremely easy, and think through the whole process, even before they come to the cooking school and after. It's even things like collecting feedback: what went well, what went badly?
What did you learn? Even creating personalized feedback for them. It's extremely good and useful. I have

Mihaly:

advice here that I heard from Chase Dimond, who is a genius email marketing expert and content marketer. He was very successful on Twitter, with more than a hundred thousand followers. Then Elon Musk announced that he was planning to buy Twitter, and he thought, shit, maybe something will change here very significantly. The next day he started to build his LinkedIn, which is now 200,000 people. And Twitter did change: most of the big influencers on Twitter say their reach has decreased by 10 to 15% since Musk took over. So I guess for a small business like us, it would be advisable to explore other channels, right?

Viktor:

Yeah, yeah, sure. First of all, own your channel, and that's email. If you have email and you have relationships with HR people, with big companies, with recruitment companies or whatnot, if you have personal relationships, that transcends any kind of medium. It doesn't matter whether at the end of the day you use Twitter or LinkedIn or Google for that matter. Having relationships, having your own email list, is a must, and obviously diversify and test. If there's a new channel like TikTok, mess around with it, but don't put all your eggs in one basket. That's good advice. Obviously, if you don't give much value, let's say you're just creating affiliate articles, then most probably your project doesn't have a good and bright future, to be honest. But if you're solving problems, like in your case: you're solving the problem of people having fun, organizing them into one place, handling things like allergies and such. And they don't have to deal with it, because you just provide a link: "Okay, share this link with everyone, and we'll nudge anyone who hasn't responded." You can create a situation where you are kind of a savior, because normally it would be a big headache, and you're taking everything off their shoulders. That's a good situation to be in. Okay,

Mihaly:

I think let's go back to Anthropic. Yeah. Sure.

Viktor:

Okay. Yeah, sure. So why did I say that they're probably one of the most under-hyped AI companies out there? Because obviously if you go on Twitter, you see OpenAI is doing this, OpenAI is doing that, or Microsoft or Google is doing this or that. But Anthropic is already out-achieving everyone on the market, because they have a model called Claude. They have a small one, Claude Instant, which is roughly the analog of ChatGPT 3.5 Turbo. They also have Claude+, which is a GPT-4-ish model. And they also have Claude 100K. What does that mean? It has a hundred-thousand-token context window, which means you can actually push 120 pages into the prompt itself. What does that mean for us, for example? We have a podcast and we have a transcript. We cannot push the whole transcript into GPT-4, because sometimes it's even longer than 8,000 tokens, which is about 10 pages; it can be much more than that. So we have to chop it

Mihaly:

up into six pieces, which is really

Viktor:

painful. Yeah. And some context gets lost that way, too. It's not just recursively processing smaller batches and then aggregating: sometimes there's information in the first batch which is relevant for the last batch, but it's lost, because everything is processed individually in smaller pieces. So it's a pain in the fucking ass. It's just like what OpenAI is doing with Whisper: you can upload a file, but it can be at most 25 megabytes, and it's a pain because you have to chunk it up, process each piece individually, concatenate the results, and so on. And what's insanely good with Claude 100K is that you can actually feed 120 pages into it. So we feed in the whole transcript we created, and we can just ask questions like, "Okay, suggest five titles for this episode." We can even use few-shot examples: "Here are five good podcast episode title examples; based on this transcript, provide five more." And it's insane: you can use it to create LinkedIn posts, you can use it to create show notes, and it's extremely good and it's already available. If you go to poe.com and subscribe, you get access to the hundred-thousand-token model. Obviously it's limited, so not an unlimited number of interactions is allowed, but you can get access. We use it, and it's insane. And they already have API access,

Mihaly:

so if you want to build something with this 100,000-token model, you can.
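The chunk-and-aggregate workaround Viktor describes (for both Whisper's file limit and GPT-4's 8K window) can be sketched roughly like this; the four-characters-per-token ratio is a common rule of thumb, not a real tokenizer:

```python
def chunk_text(text: str, max_tokens: int = 8000, chars_per_token: int = 4) -> list[str]:
    """Split a long transcript into pieces small enough for a model's
    context window, using the rough heuristic that one English token
    is about four characters (a real tokenizer gives exact counts)."""
    max_chars = max_tokens * chars_per_token
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

# A ~90-minute transcript of ~85,000 characters (~21k tokens) needs
# several chunks under an 8k-token window, but fits Claude 100K whole.
pieces = chunk_text("x" * 85_000)
```

Each chunk then has to be processed separately and the partial results stitched back together, which is exactly where the cross-chunk context is lost.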

Viktor:

Right, right. But it's not public yet, so you have to apply for it. It's kind of like the GPT-4 32,000-token model: you can apply to get GPT-4 32K, which has roughly a third of the hundred thousand, so basically 40 pages. So there is a GPT-4 model which can handle 40 pages already, but it's waitlisted, so you have to get access to it. I have access, but it's not common; it's quite rare to get access to that model. So it's intriguing that Anthropic already released theirs. It's not some vaporware bullshit on their part; you can actually use it. You can go to poe.com, pay the $20, and use it, and we do, and it's insane. Do you want to give us a comparison? I know you made some comparisons. Obviously I use it as well, so we both use GPT-4 and the Claude models in parallel, but I guess you created a more structured way of comparing GPT-4 to Claude. Can you share what you found?

Mihaly:

Sure. Okay. So I wanted to understand when you should switch to Claude if you're already using ChatGPT, because that's our use case. In past episodes we talked about several use cases of ChatGPT, for example generating LinkedIn posts, threads, or prompts for images, and I wanted to explore Anthropic's model Claude on these dimensions. So first, let's start the comparison with the context window. Currently what is accessible is the 8,000-token window for GPT

Viktor:

four. That's basically 10 pages in GPT-4's case, and Claude 100K is 120 pages. So it's roughly ten times more that you can now fit into Claude. Yeah,

Mihaly:

so this 8,000 is very good for creative tasks, for example coming up with names for our podcast, which we did, or short tweets, even LinkedIn posts. It's also enough for our God prompt, which basically does a small research step on the most important points first and then performs the actual task. So this 8,000-token window is great. But, as you mentioned, if you want to summarize the podcast, it becomes almost impossible: even if you cut it into six parts and feed it into ChatGPT, the rolling context window starts to cut out the first parts, so it's not good for long documents at all. But with Claude 100K, I just pasted the transcript into Claude and I was able to ask several questions; I asked 20 or 30. What are the most vital parts? What are the most engaging parts? Write me a LinkedIn post. And it didn't forget the original prompt, which was really long: 80-85,000 characters, which is about one and a half hours of us speaking. So it was very useful for

Viktor:

this use case. In your experience, because you mentioned LinkedIn posts, the God prompt, and these kinds of things: quantity is not everything, right? Obviously it's good that you can fit more information into Claude 100K, but quality-wise, what's the difference between GPT-4 and Claude 100K?
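The arithmetic behind the 85,000-character transcript Mihaly mentions is easy to check with the same rough four-characters-per-token rule of thumb (approximate, not a real tokenizer):

```python
def fits_context(num_chars: int, context_tokens: int, chars_per_token: int = 4) -> bool:
    """Rough check: does a document of num_chars fit a model's context window?"""
    return num_chars / chars_per_token <= context_tokens

transcript_chars = 85_000                       # ~1.5 hours of speech, per the episode
gpt4_8k_ok = fits_context(transcript_chars, 8_000)       # ~21k tokens: too big
claude_100k_ok = fits_context(transcript_chars, 100_000) # fits with room to spare
```

Under this estimate the transcript is roughly 21,000 tokens, far past an 8K window but well inside 100K, which is why the whole thing can be pasted in at once.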

Mihaly:

Okay, I'll go into the quality on the different parts. First, when I wanted to generate LinkedIn posts: ChatGPT is very good with zero-shot prompts, so I just said, "Please write me a LinkedIn post based on this short summary." On the other hand, it was harder to get a good LinkedIn post from Claude with a zero-shot prompt, and even with few-shot prompts. I provided two examples to Claude and asked it to write something similar, and it was not that strict about following my instructions.

Viktor:

Yeah, but if you had to put a number to it, on a scale from one to 10, where 10 is the perfect, masterful LinkedIn post and one is basically garbage: what is GPT-4 compared to Claude 100K?

Mihaly:

Okay, I would look at it from another perspective. I think Claude needs two or three more iterations of prompts, and even then the end product is not good enough that I'm satisfied with it at all. Maybe I would

Viktor:

say 60

Mihaly:

to 70%.

Viktor:

Okay. So 60 to 70% is the maximum you can get out of Claude, and GPT-4, what's the number for that?

Mihaly:

Hmm, 90 to

Viktor:

95. Okay, so it's almost perfect, right? Okay, that can be our baseline.

Mihaly:

It's not as perfect as what a good copywriter would write, but yeah, it defines our baseline, and

Viktor:

for now, yeah, sure. And I guess you can even chain things, right? You can do the pre-processing on large documents with Claude: get a draft which is cohesive, even if it's not perfect stylistically, not something you'd be satisfied with as a master marketer, but at least it's cohesive, because you could feed everything in in one go. Then you can use GPT-4 to fine-tune it, because by then it's already condensed, already short, already selected. Once you have the gist of the LinkedIn posts, you can go one by one and fix them up with GPT-4. Does that make sense?
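The two-stage chain Viktor sketches can be expressed as a simple composition; the two callables here are stand-ins for real API calls (e.g. a long-context model like Claude 100K, then GPT-4), not actual client code:

```python
from typing import Callable

def chain(condense: Callable[[str], str],
          polish: Callable[[str], str],
          document: str) -> str:
    """Two-stage pipeline: a long-context model first condenses the raw
    document into a cohesive draft, then a stronger reasoning model
    polishes that draft. Both stages are passed in as plain callables."""
    draft = condense(document)   # stage 1: long-context condensation
    return polish(draft)         # stage 2: high-quality rewrite

# Stubbed usage; real code would call the respective model APIs here.
result = chain(lambda doc: f"draft({doc})",
               lambda d: f"polished({d})",
               "transcript")
```

Keeping the stages as separate callables makes it easy to swap either model without touching the pipeline itself.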

Mihaly:

Yeah, I think you're right. An unfair advantage comes when you can combine different AIs. For example, when you combine ChatGPT and Midjourney, or when you combine Claude and ChatGPT, that's where you can create some world-class

Viktor:

texts. Yeah, this is extremely important, what you just covered: if you know the upsides and downsides of these tools, you can get the most out of them. My biggest problem is that people are too obsessed with one single solution: one single model, one single prompt, one single whatever. And it doesn't make much sense, because if you understand your tools, if you don't just have a hammer but a toolset of different tools and you understand when to use each, you can get world-class results. The end goal, at least in our case, is not theoretical; it's very practical. We want to get shit done. We want to make money, save time, scale things. We're dealing with the business of AI, not the intellectual side of AI. So if you understand these tools, and that's why we use them, we can get more value out of them. And that's what I urge the listeners to do as well: please don't think that one single tool will solve all your problems. It won't. There are so many tools, and that's why we're covering them, that's why we're doing these deep dives: so you know the good, the bad, and the ugly of each tool you can use. Okay,

Mihaly:

next one: naming a podcast. This is a short, creative text, and I think there's a huge difference here. I started with several instructions and also a few-shot prompt. ChatGPT was much better from the get-go: it used alliteration without being asked, and the names were catchier. But Claude was very close; for example, for our podcast it suggested "Disruptors Daily," which is something we had mentioned in the past as well. Still, and now it's just a subjective feeling, I feel Claude wasn't as creative.

Viktor:

I have an interesting experience I'm going to share with you. Okay, it's two minutes: Viktor is digressing for two minutes again. So at the end of 2021, GPT-3 was released, and I was playing with it, and I tried to copy my coach. I basically fed in example question-answer pairs, five of them, then asked my question, and it provided a meaningful answer, helping ease the pain of my baby who had a stomach ache. That was the use case, and it worked wonderfully. It could even copy, without any specific instruction, the double spaces at the end of each sentence. It was nine out of 10, I would say; almost perfect, really good. Then two years passed, and all these models got fine-tuned on what a good answer is. If you're asking for an answer, what is the human feedback on that? What makes an answer a good answer? Giving advice, hedging the advice, and so on: "Okay, obviously this is why a tummy can hurt, and obviously see a doctor." That's the structure of it. And what happened is that two years ago these models, like GPT-3, were like an SUV: you could off-road with them. In the last two years, a lot of work went into putting all the models on rails, keeping them on the rails of what is meaningful. It's faster, it's safer, generally it's better, but now it's extremely hard to instruct them to follow a creative task which diverges from the common path. In the coaching case, the answer shouldn't be giving advice; it should be introspective, more question-based, and so on. And now I have to be extremely
explicit: "Okay, please write double spaces after each sentence. Please ask follow-up questions. Please..." I have to define everything, and it's a pain in the ass, and the best I can get is seven out of 10. Even though these models are in general much better, much more valuable, they are now very narrow. In a sense they got less creative, because answering now carries so much bias. And I guess it's the same with LinkedIn. Two years ago, if you provided five good examples and said, "These are five good examples of LinkedIn posts, please write me a new one based on this information," I guess it worked much better, because the model had less bias about what the median LinkedIn post looks like. Now, if you ask for a LinkedIn post, it's very biased; if you ask for an answer, it's very biased. It has a lot of bias about the different modalities of answers. That's the trade-off that happened over the last two years, and that's something I just wanted to share. So if you're frustrated by this, my current solution is to be extremely explicit about what you need. The other hack is to stay away from the biased expressions. In this case, try not to even say "LinkedIn post": "These are good text examples; write me a good one," without mentioning LinkedIn. The same for Twitter: don't say "I need a tweet," because that's very biased toward the median tweet, which, if you have a specific goal, is, I don't want to say shitty, but in that sense it's shitty.
So just write: "These are good texts I need; please provide me one more based on these criteria."
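Viktor's workaround can be written down as a small prompt builder. This is an illustrative sketch, not a prompt the hosts actually use; the point is simply that the format label ("LinkedIn post", "tweet") never appears in the prompt text:

```python
def few_shot_prompt(examples: list[str], criteria: str) -> str:
    """Build a few-shot prompt that deliberately avoids loaded format
    labels like 'LinkedIn post' or 'tweet'. Calling them 'good text
    examples' pushes the model to imitate your examples rather than
    its biased idea of the median post."""
    parts = ["These are good text examples:"]
    for i, example in enumerate(examples, start=1):
        parts.append(f"Example {i}:\n{example}")
    parts.append(f"Please write one more good text based on these criteria: {criteria}")
    return "\n\n".join(parts)

prompt = few_shot_prompt(["First sample post.", "Second sample post."],
                         "short, concrete, no hashtags")
```

The examples themselves carry the format; the prompt only ever says "good text."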

Mihaly:

Okay. Let's go back to comparing ChatGPT and Claude. The next one is generating prompts for Midjourney to create AI images. What do you think, Viktor: which will be better, GPT or Claude? Or maybe the final verdict will be something

Viktor:

else. I think GPT-4, because it has stronger reasoning capabilities. My guess is that the 100K context window model is not the same as Claude+, which is the one comparable with GPT-4. The analogy is: GPT-4 and Claude+ on one side, Claude Instant and Claude Instant 100K with ChatGPT 3.5 on the other. So maybe comparing the 100K Claude to GPT-4 is not fair, because it's almost like comparing ChatGPT 3.5 to GPT-4.

Mihaly:

Okay, Viktor, I think the final answer will surprise you. I will share a few images with you. If you're listening to this podcast, I'll describe what we can see on the screen, but you can also click on this chapter on disruptordigest.com and quickly find this part in the video. First I asked Midjourney to generate Greek kitchens; I just put in "Greek kitchen," nothing else. What we can see in the picture: four images of Greek kitchens with the sea in the background, an inside view. And again, another four pictures; these are very similar. They're not realistic, but they're not paintings either; they're somewhere in between. I did it four times with the very same prompt, just "Greek kitchen," and the results were very similar. Again, I'm showing four pictures of Greek kitchens: blue, not realistic, not paintings, sea in the background. Basically the same. And one more of these. Then I asked Claude, with the previously mentioned prompt, to generate an elaborate prompt for this Greek kitchen. For example: "Imagine a blue kitchen painted on canvas, from a certain perspective," and so on. And now, for one of the iterations, the images are very similar compared to the previous ones, but now it's fully a painting. So now the style is different. Why is that interesting? Because with a different style, we can test something; for example, think about creating Facebook ads. Now this is something different. And here comes the next one, again something Claude came up with. It said, "Create a picture of four culturally diverse women who are having fun in a cooking class," and now it was something different.
And I'm glad we live in a time when Midjourney 5.1 exists, because the faces we can see are almost perfectly realistic. When I tried to create faces in Midjourney v3, it was very

Viktor:

bad. Yeah. And the neat thing happening here is that you used chaining as well: you used either GPT-4 or Claude to generate prompts for you, which you fed into Midjourney, right? And I guess this speaks to the strength of Midjourney.

Mihaly:

I just want to focus on Claude and GPT. Just one thing: now we came up with something very different than before, which is three women having fun with their hair blown back. So it's a very dynamic image. And also the next one Claude came

Viktor:

up with this. Okay, just for those who are listening: previously what you saw on the screen was basically the same kitchen, very Greek style, with the sea in the background, very blue-dominated scenes. And then what Claude did was create a dynamic scene of people enjoying themselves. And then it came up with even more dynamism: not just smiling, but the hair thrust into the air. A shot like this would cost a lot of money; if someone had to shoot it, I imagine it would take a lot of tries and a lot of time. It's insane that this dynamic setting is possible now, and it was all generated with Claude. It's insane.

Mihaly:

Yeah: hiring the talent, booking the studio, studio lights, multiple people on the crew, and of course the photographer. I'd guess at least 5 or 10K minimum, maybe up to 20-30K a day, for creating photos like this. Okay, so the next one. This is something Claude came up with: very plain and boring photos of bread and a bouquet of flowers on the table, which is not good for the purpose, but I just want to show you that now Claude started to diverge. And I did the very same with ChatGPT, and it came up with this picture: olive oils and tomatoes on a plate, with some people picking them. So again, a very different approach. On the next page we can see Greek kitchens, but the style is very different, more orange and red, because ChatGPT was focusing on the floor, which is terracotta, a classic Mediterranean flooring material. So now, maybe if you're an interior designer and you want to see different creative directions for a kitchen, or a scene for a film or something like that, you get a different perspective. And the last two: again a similar prompt, but what you can see is a modern Greek kitchen, with a more modern style, very photorealistic. And the last one is again a blue kitchen, but focused on the floor, because ChatGPT included something about the floor. It's photorealistic and still blue, but very different from the previous one.

Viktor:

Yeah. Just to make the whole process clear for the listeners: you had a prompt, and the original prompt was what, "Greek kitchen"? Okay, so that was the original prompt, and then you had two different flows: you used Claude, and you also used ChatGPT, GPT-4. Right. And what was the prompt there? How did you explore and move away from just having a Greek kitchen?

Mihaly:

So we have a prompt, created by Samsung Mobiles, and it's a long prompt that starts like this: "I want you to act as a prompt engineer. You will help me write prompts for an AI generator called Midjourney." And later it details: please define a camera lens, a perspective, a color style, color references, the position of elements, and so on. So basically you outsource the creative process to language models like GP

Viktor:

T and Claude. Yeah, so this prompt was used to take "Greek kitchen" and generate something more elaborate, right? And only this prompt was used. Yes. Yeah. We can share this in the show notes, actually. It's also part of the God prompt plugin we have: if you have the God prompt plugin in ChatGPT, if you're a subscriber, you can use it, and what happens is it realizes you want to generate an image and instructs ChatGPT to use this prompt to generate better prompts for Midjourney.

Mihaly:

Okay, so my impression of Claude and ChatGPT for these use cases is that you get more if you combine them. You run the same process on both, and you feed Midjourney the final prompts generated by ChatGPT and by Claude. In my case, my goal is to create very different images so I can test on Facebook which gets a higher clickthrough rate. So I think combining them is the unfair advantage here

Viktor:

again. Yeah, that sounds awesome, because once again, I just want to highlight: you need basically zero background here, because you just had "Greek kitchen." You copy-pasted this prompt, but you don't even have to copy-paste, because you can just use the God prompt plugin: you write "Greek kitchen" and it generates variations for Midjourney. And you can copy the same prompt into Claude, and it's like using a different creative process, like using another creative person, and once again you get different results. Awesome. In my mind, those pictures were 10 out of 10, each of them. They were insane. You get insane results with zero background; the only thing you know is that you want a Greek kitchen. I think if you understand this, you're much further ahead than anyone in your field.

Mihaly:

Okay, just to finish this comparison, a few technical things: I think Claude is a little bit faster, so you'll get answers a bit sooner, but in general the speed is about the same for GPT-4 and Claude.

Viktor:

Okay. So my impression was that Claude+ is roughly the same as GPT-4: quality-wise it's lagging a little behind, but speed-wise it's the same. And Claude Instant and Claude Instant 100K are similar in speed and quality to 3.5. So in the Instant 100K model you can fit 100K tokens, but it has lesser abilities than the Claude+ model, similar to GPT-4 versus GPT-3.5 Turbo: Turbo is quicker and cheaper, but much less capable at logical thinking than the GPT-4 model.
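The capability/context trade-off Viktor lays out amounts to a simple routing rule. A minimal sketch, with the caveat that the model names are the informal mid-2023 labels used in this episode, not official API identifiers:

```python
def pick_model(doc_tokens: int, needs_complex_reasoning: bool) -> str:
    """Rough routing rule from the discussion: long inputs need the
    100K context window; complex instruction-following needs the
    bigger, more capable models."""
    if doc_tokens > 8_000:  # won't fit GPT-4's standard window
        return "claude-100k" if needs_complex_reasoning else "claude-instant-100k"
    return "gpt-4" if needs_complex_reasoning else "gpt-3.5-turbo"
```

For example, summarizing a ~21k-token transcript routes to a 100K Claude model, while a short creative task with strict instructions routes to GPT-4.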

Mihaly:

Okay. Besides these use cases, I also tried writing poems and fiction, but the overall impression is the same: Claude is not following instructions closely. And one more thing,

Viktor:

Viktor. Sure. So here's the takeaway, if I try to summarize what Claude 100K is good for. The name is even Claude Instant 100K, so it's the equivalent of ChatGPT 3.5 Turbo but with a bigger context window. Similarly, if you use the God prompt with GPT-3.5, it won't follow it; it won't be as good at complex logical reasoning and instruction-following. The same goes for Claude 100K: it's not that good with the God prompt, and it's not that good at fiction or poem writing, because when complexity increases, you should aim for the better, bigger models like Claude+ or GPT-4. That's the big takeaway. Okay, so we

Mihaly:

We talked about Anthropic's products. Let's have a big-picture view of the company.

Viktor:

Yeah, sure. Let me share my screen. I'm also going to describe it for those who are just listening. What you see is the business model canvas for Anthropic.com. As we discussed at the beginning, it was founded by AI researchers, so if you look at the key activities, R&D is very important. In the first one and a half years they actually only did research and development, and then they started to commercialize. Since then, the other very important key activity is the product-market-fit search across all the different industries. If you look at the key resources they have, it's obviously the $1.5 billion in funding, and also their 100K Claude model, but also the ex-OpenAI people: their CEO was VP of Research at OpenAI. So these people have all the knowledge, all the know-how, all the proper frameworks to create something which is useful, and I guess the Claude 100K case is a telltale sign of that: they are the only ones who provide this kind of model already. And the interesting thing is they have key partners like Google. Google invested, but they are also partnering with Google to provide compute, because Google is providing the cloud services, right? But they also partnered with Zoom, for example. Zoom Ventures invested money just a few months back, and Zoom is actually good for customer service as well: Zoom has a solution providing communication for customer service agents, and they are going to integrate Anthropic's models to, on one hand, help users get more relevant, faster, more personalized answers, and also make the customer service agents more productive. So that's neat. And I also put something there that I'm not sure about.
I just put it there that, for example, a company like scale.com, which is aggregating the different AI providers and providing services to enterprises, is somewhere they can provide value as well: okay, here's our model, and you can sell our model if someone is in need. So, key value proposition: what do they provide? On one hand, they provide safe models. They provide 100K tokens of context, basically 120 pages of text. And Claude-Next, which they're working on, is going to be 10 times better than GPT-4. So that's the key value proposition. And who are they providing this to? The customer segments are education, entertainment, government, the intelligence community, legal, entrepreneurs: basically, as I said, they're trying to find product-market fit. And what kind of channels do they use? They use Google Cloud. As I said, they use Zoom Contact Center, which is the solution Zoom is providing for customer service agents. And mainly they communicate currently through their API documentation; there's actually quite good documentation about how their API works. On white sticky notes, I added channels I think they should use, could use, or most probably will use in the future. For example, one of the big problems for them is finding product-market fit, but it's also a problem for the clients. Let's imagine you have an enterprise company and you don't know how to integrate something. Having playground examples, or a quiz where you answer questions about what you are building and then get back examples of how others are using the tools, so helping people figure out how and what to integrate into their product, could be extremely valuable. And also the "how" side.
So how Anthropic helps developers could be YouTube education as well, and integrating with AI tools like LangChain and similar tools, and also with no-code tools: helping no-code tool providers easily integrate the Anthropic model's features. So that's the big overview. Also, on the cost side, there's a fixed cost of wages and a variable cost of compute, and as we discussed already, that can be extremely expensive. On the revenue side, it's very simple pay-per-use: as much as you use, you pay for, and it's priced quite similarly to OpenAI. And what they don't have, or at least I wasn't reading about it (maybe they have it, but most probably they're going to have it in the future, which is why I put it on a white sticky note), is Foundry-style dedicated private instances. I just talked to a lawyer working for an international law firm, and for example they pay, I guess the pricing starts at $800,000 per year, and then you get dedicated instances, in OpenAI's case. And why is it good? Because obviously it's privacy-preserving, they can use clients' data and so on, which is paramount, a must-have, for a legal company. This is what OpenAI calls Foundry, and the gist of it is basically dedicated private instances. So if Anthropic doesn't provide it, they most probably will, or they should, because these big companies can easily pay a million dollars a year; it's nothing compared to the additional value it can bring or the additional capabilities it can unlock. So this is the big overview of the strategic business model canvas for the company. Okay, let's move on.
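The pay-per-use model mentioned above is easy to make concrete with a small estimator. The per-million-token rates below are made-up placeholders for illustration, not Anthropic's actual prices; look those up before relying on any numbers.

```python
# Minimal pay-per-use cost estimator. The rates are invented placeholders
# (USD per 1M tokens, as (prompt, completion) pairs), not real prices.

PRICE_PER_MILLION = {
    "instant": (1.60, 5.50),    # placeholder rates for the fast tier
    "plus": (11.00, 33.00),     # placeholder rates for the capable tier
}

def estimate_cost(tier: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Return the estimated USD cost of one call under pay-per-use pricing."""
    prompt_rate, completion_rate = PRICE_PER_MILLION[tier]
    return (prompt_tokens * prompt_rate
            + completion_tokens * completion_rate) / 1_000_000
```

The point of sketching this out: a single 100K-token prompt on a cheap tier still costs only cents, which is why "feed everything in" workflows are economically viable.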
Let's get to the next part. We already discussed that they're looking for product-market fit, and I guess the listeners want to know: okay, what can I use it for right now? We already covered long-form content generation, editing, and translation: generating a whole book, editing a long manuscript, coherently translating whole books, and training materials for enterprises as well. And we already discussed that if you chunk up a big piece and process it piece by piece, you kind of lose the coherence between the pieces. It's already working, you can already try it, and it's quite good, but obviously it has limitations: this 100K model is based on the Instant model class, which is fast but less capable cognitively. It can also be used for extended conversations and role playing. It's almost like...
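The coherence point above is worth making concrete: a 100K-token window avoids chunking entirely, but when you do have to chunk for a smaller window, overlapping the chunks preserves a little cross-boundary context. A simple word-based sketch:

```python
# Word-level chunking with overlap. Each chunk shares `overlap` words with
# its predecessor, so context at the boundary is not completely lost. This
# is the workaround a long context window makes unnecessary.

def chunk_words(text: str, chunk_size: int, overlap: int) -> list[str]:
    """Split text into word chunks of chunk_size, each sharing `overlap` words."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    words = text.split()
    step = chunk_size - overlap
    return [" ".join(words[i:i + chunk_size]) for i in range(0, len(words), step)]
```

Even with overlap, the model only ever sees one chunk at a time, so anything spanning chunks (plot threads, cross-references) still degrades; that is exactly what the 100K window fixes.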

Mihaly:

Just one more thing. Do you think it's possible to feed Claude the most important, most interesting, wildly engaging parts of our first eight episodes, and it'll write a book for us?

Viktor:

I don't know, we have to try. The big limitation currently is that we don't have API access, only access through Poe.com. And it's neat that they have the 100K context, but the answer length is limited, so if we ask for a whole book, I guess it won't fill out the whole 100,000-token limit. So our real limitation for this test is just that we don't have API access, but as soon as we have it, that's definitely something we're going to try: generate as much text as possible and see what the output looks like. Okay. So yeah, as you also discussed: analyzing entire research papers and legal documents, summarizing the main points, extracting key details. Or in the coding scenario: refactoring large pieces of software, adding comments and documentation, writing unit tests. And since it's cognitively limited compared to the Claude+ model, or in a sense to GPT-4, I think automatically generating documentation from code would be better suited than writing code itself. Code writing is quite complex compared to writing documentation about what a piece of code is doing, because documentation is quite sequential: there is the code, and it has to be explained. So writing automatic documentation is something which is possible now, for big codebases as well. Because think about it: there's a big codebase, you want to onboard a new developer, and someone has to write the documentation, right? That can be outsourced to these kinds of models. And if you commit a change, you can update the documentation as well. So you can have up-to-date documentation, which is kind of unheard of.
It's almost impossible to do otherwise, and now it's possible; that's something I'm extremely excited about. And the last part is educational applications. That's the same as with the book idea: we have to have API access. But once you have API access, it's going to be possible to generate entire course curriculums. The neat thing is we can feed in so much data: textbooks, summaries of textbooks, summaries of guidelines, and so on. We can feed in lots of information, and even relevant context, like who the students are that we're generating this for. So this can be generated even on a student-by-student basis: you get your own specific curriculum, and someone else gets theirs. And since it can hold all that long context, that's something I'm excited about. I guess even if it's not great right now, in a year, or two at most, it's going to be extremely good quality. So even though it's currently limited by this Instant model (similar to ChatGPT 3.5 Turbo, the same with Claude Instant), even though it's limited cognitively, it's going to get better. And I think this one is quite strong.
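The auto-documentation workflow described above amounts to wrapping a whole source file into one long-context prompt. A sketch, with illustrative prompt wording (there is no official "documentation prompt"; this is just one reasonable phrasing):

```python
# Build a single documentation-request prompt from a source file. With a
# 100K-token window, even large files fit without chunking. The instruction
# text is an invented example, not a prescribed format.

def build_doc_prompt(filename: str, source: str) -> str:
    """Build a prompt asking a model to document a source file."""
    return (
        f"Here is the file `{filename}`:\n\n"
        f"```\n{source}\n```\n\n"
        "Write developer documentation for it: summarize what the file does, "
        "then describe each function's purpose, parameters, and return value."
    )
```

The "keep docs up to date" idea from the discussion is then just re-running this on every commit that touches the file.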

Mihaly:

Okay, what kind of businesses can Disruptors build on top of Claude?

Viktor:

cloud? Okay, so let's, that, that's one of the favorite parts of mine to discuss business ideas, right? And give you, give you some food. Uh, food for thought. So what's the biggest problem with business ideas only? So let's say, give you an example. My big business idea is CV generator, which is helping job seekers to generate first offs, uh, of longer more compelling resumes, which are tailored to the specific job opening. Right. So that's kinda like the, like the idea, but what's, what's the big problem? Well, what's, what, what is my biggest problem is that as a developer or entrepreneur, do you know what you should build or, uh, do you know what exactly will your product be delightful or can you use this description as a, as a compass age day? No, obviously not, because it's quite vague. It's just like, uh, okay, this is the idea of, of, of, of CV generator. But that's, that's my test model. Uh, I became up with, with test model, uh, it's actually solving this problem because with test model, for example, in this case, it's, uh, a testimonial which. Uh, actually following four different things. First, it's AI generated. Why? Because it's more relevant and more personalized. Second, it's a short testimonial. Why? Because it's easy to relate to. And also three, it's a vividly illustrates the pain points being solved so people can understand what you're building for. And fourth, uh, it makes the benefits tangible. So you know, what is the exact results people should get from something. So in this case, like, let's say the first was like, uh, the idea was that like just CV generator. Let's compare it to a testing model. In this case, uh, a test model could be I'm a computer science graduate and was struggling to create resume that stood out to tech companies. Uh, this tool took my basic information and transformed it into a detailed company resume that highlighted my coding project and relevant coursework. 
"I started getting callbacks for interviews almost immediately." From just this short testimonial, you instantly know what you're building for, right? You want to take in basic information, and the end goal, the KPI you are measuring, is whether people are actually getting callbacks from companies, and so on.
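The four test-model rules above can be captured as a tiny checklist. The heuristics below (a word-count limit, keyword lists for pains and benefits) are invented for illustration; in practice you would judge a draft by hand, and rule one (AI-generated) isn't checkable from the text at all.

```python
# Score a draft testimonial against three of the four test-model rules.
# The thresholds and keyword-matching approach are illustrative assumptions.

def check_test_model(testimonial, pain_words, benefit_words):
    """Return a dict of rule-name -> bool for a draft testimonial."""
    text = testimonial.lower()
    return {
        "short": len(testimonial.split()) <= 80,                    # rule 2
        "pain_point": any(w in text for w in pain_words),           # rule 3
        "tangible_benefit": any(w in text for w in benefit_words),  # rule 4
    }
```

Used on the CV-generator example: pain words like "struggling" and benefit words like "callbacks" both appear, so the draft passes the mechanical checks.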

Mihaly:

getting the pains, the gains, and the jobs of customers, but in a more easily understandable way, right? It's a testimonial a human could say, but if you're reading it as a developer, it's easier to understand what the target audience needs

Viktor:

and what they like, right? Yeah. And if I want to be meta, if I give you a test model for the test model itself, here it is: "Here's the hard truth: bullshit personalization doesn't sell. Real stories do. The test model transformed abstract benefits into tangible testimonials about solving real pain points. It helps investors see value, makes customers feel understood, and gives the team a meaningful goal. The test model is our daily reality check and our best VC pitch." So that's the meta of the test model: you understand that it's a compass for everyone, it's easier to convey, and it's something you can take out each day to guide you to provide value and delight. So this bare CV generator idea is obviously bad; a tech-job resume writer is tangible. But it can also be an academic CV tool: all your research and abstracts can be fed into it, which wasn't possible before because the context was limited. Or it can be an executive-level resume creator: if you're C-suite, you want an up-to-date profile but you don't have time, so obviously you can shell out a few hundred dollars to have everything you did fed into it and get an extremely personalized, up-to-date, strong CV. So if you're intrigued, you can go out and build this tool. But let's move on to another idea: a social media management service. A quick testimonial: "I own a small Italian restaurant in the heart of a bustling city. We have amazing food, but I struggled to get the word out on social media. Then I found this service. They started creating posts that captured the magic of our meals and the atmosphere of our place. Suddenly we started getting more likes, shares, and, most importantly, customers." So that's something you can go out and create: social media management for local restaurants.

Mihaly:

How is it better than GPT-4 as an API? Why would Claude be better in this case?

Viktor:

I think it's, for example, processing more information. If you already have reviews, feedback, even a book where people wrote feedback, you can feed everything into it, right?

Mihaly:

So if you want to create social media posts and you have existing content, such as testimonials, a website, past social media posts, you can feed a lot of them in. You don't have to select them; just put everything in, and it'll generate new social media posts, right?
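That "just feed everything in" workflow is essentially prompt assembly: concatenate unselected raw material into one long prompt and ask for new posts. A sketch, with invented section labels and instruction wording:

```python
# Assemble raw brand material (reviews, old posts, website copy) into a
# single post-generation prompt. Labels and wording are illustrative.

def build_social_prompt(materials, n_posts=5):
    """Assemble a dict of {section: [items]} into one post-generation prompt."""
    sections = []
    for kind, items in materials.items():
        body = "\n".join(f"- {item}" for item in items)
        sections.append(f"## {kind}\n{body}")
    context = "\n\n".join(sections)
    return (
        f"{context}\n\n"
        f"Using the material above, draft {n_posts} new social media posts "
        "in the same voice, each with a hook and a call to action."
    )
```

With a 100K-token window, the `materials` dict can hold every review and past post the business has, with no curation step.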

Viktor:

Yes, that's right. But also, let's put a spin on it: author promotion. For people who self-published their books and promote them on social media, you can actually feed their whole book into it. That's insane. If you have a niche, you can go for it and serve them. Also an e-commerce store manager: handle the social media presence for online stores selling niche products. Basically, you can do the social media management service in any niche. So let's move on: academic essay helper, helping students. A quick test model for this: "As an MBA student, I often had to research and write case studies. Each one required a deep dive into company histories, financials, and strategic decisions. It was overwhelming until I found this tool. It helped me structure my research, organize my thoughts, and present compelling analysis. My case studies went from being a source of stress to a source of pride, and I received high praise from my professors for my in-depth analysis and clear presentations." You may think this is quite a niche, but a lot of MBA students are paying hundreds of thousands of dollars just to get this education. So it may sound like an extreme niche, but it's lucrative if you just focus on them. And you can focus on legal, on medical, on tech, because the people studying these fields at university already invest a lot, not just time but a lot of money. And cohesively summarizing several research papers wasn't possible before. So let's move on: fiction co-author. Basically suggesting edits, characters, plots, and descriptions, helping flesh out an overall story arc, for example for sci-fi. Here's an extremely short test model:
"I always dreamed of writing a fantasy novel but struggled with creating a detailed world and plot. With the fantasy book assistant, I was able to flesh out my characters, plot, and world, turning my dream into reality. I've just published my first book." You may think this is extremely niche, but think about it: just one website, nanowrimo.org, a nonprofit focused only on novel writing, is doing 800,000 visitors per month. That's insane. People who try to write fantasy pour their lives into it; they pour so much time and money into realizing their dreams, and you can build a tool just for them if you focus on just them. And I guess it's worth 10, 20, 30 bucks a month if it's good and it actually helps you move forward. It doesn't have to be just sci-fi novels either; it can be a fantasy book assistant or a romance novel collaborator, so you can bring this to other genres as well. So let's move on: legal tech company. A quick testimonial: "As a small business owner, legal jargon used to send shivers down my spine. The contract analysis service made it easy. They turned complex contracts into simple language I could understand. I felt more confident in my decisions and saved a fortune on legal fees." This is something that can be done easily, and we can put a spin on it too: patent filing assistance, or a real estate law advisor for rental agreements, deeds, and those kinds of things. You may also consider these niche, but think about it: the rental market is huge, the real estate market is huge, and lots of money changes hands.
So yeah, this can be revolutionized as well. Let's move on to another topic: AI blogger. A quick testimonial: "I run a blog on JavaScript libraries but struggled to keep up with the rapid updates and developments. The JavaScript library blogger helped me create in-depth, up-to-date content that my readers love. My blog traffic has tripled." It can also be a crypto blogger, or a microbiology blogger. What's changed with these tools is that you can actually feed in the whole documentation of new libraries; you can even feed in the code itself. So the possibilities are really endless, and whatever piques your interest, you can dig deep and provide a tool which is helpful. Also news reports, if you stay on this train of thought: creating news reports, like a local community news reporter. A quick test model: "As a local journalist, I was overwhelmed with the number of stories in my community that needed coverage. The local community news reporter helped me write comprehensive news reports quickly, allowing me to cover more stories and keep my community informed." Again, lots of people think local community news is niche, but it's quite a big market all over the world, so that could be useful as well. Or product descriptions: writing Kickstarter project descriptions, luxury real estate listing describers, those kinds of things. It's easy to put a spin on it and create these tools. Also, staying on the language and tutoring side, a quick testimonial again: "I'm a business professional frequently traveling to Spain. I needed to improve my Spanish, but traditional language courses were too general."
"The Spanish tutor for business professionals gave me specific language practice for my business meetings and negotiations. My confidence in doing business in Spain has greatly improved." So again, it's the big languages besides English which can be targeted and niched down, like Spanish for business professionals, but it can be a Mandarin tutor for travelers or a French tutor for students as well. If you're an entrepreneur, this is the best time to be alive. And the same goes for STEM assistants: a middle school math assistant, a high school physics helper, or a college biology study aid. The same thing, basically. And if we go to the enterprise: what we've covered so far is basically what entrepreneurs can take on, but what can be done for the enterprise as a client? A digital therapist platform. Here's the testimonial: "I struggled with insomnia for years. This platform's sleep therapy advisor helped me understand the root of my sleep issues and provided practical relaxation techniques. I'm finally getting full nights of sleep." It can be spun toward stress management as well, or a PTSD counselor, so basically providing mental health aid for your employees: one of the best-ROI things ever. We're going to get into what kind of perks you get if you work for Anthropic, but one of the best ones, in my mind, is that they provide $500 a month for wellness; the company understands that if you're in good health, you do a better job and produce better output. But back to the list: code refactoring services, like a legacy code modernizer, a Python refactoring service, or a game code optimizer.
This is quite common: even with legacy codebases, thousands of people wrote code in, I don't know, COBOL or Fortran, and it has to be rewritten, and with these kinds of tools, where the token window is big, it's finally possible. Corporate training is another one: a cybersecurity or sales training course can be fine-tuned to each and every company, because you can feed in the company's material, you can feed in the sales script, and generate something specific to that company and sell it to them. You can also sell the service itself, so people go through the course themselves. Market research analysis is something we also covered, and we do it as well: we do lots of research, and what if all that research could be fed into it so it could make cohesive summaries? Coding tutors as well: a Swift tutor for iOS development, a Python tutor for data scientists, a React Native tutor for mobile app developers. The possibilities really are endless, I guess. And the final thing I want to say is that Zoom, as I said, already invested and is integrating Anthropic into the Zoom Contact Center, so what we're covering here is not out of touch with reality. Also, Notion AI is built on top of Anthropic models as well. And there's Juni Learning, a platform that's basically coaching students across different subjects: math, critical reading, these kinds of things. It's already done, but it's quite general. So if you feel inspired, I urge you to create something which is valuable for your niche; you can niche down and create real value.

Mihaly:

Okay. And we also looked at the community around Anthropic, and we found that Twitter is very strong, with a hundred thousand followers; when they announced their most recent Claude model, it reached more than 2 million people. So that's very strong. But I didn't find any dedicated Facebook groups focusing on Claude itself, while there are several Facebook groups focusing on Midjourney and GPT.

Viktor:

I guess the main reason for that is that API access is still in closed beta, so developers can't really access it now. The only way to access it is either through Poe.com or through Slack: you can add their Slack bot and have a conversation, but that stays kind of niche. Still, if we move on to the final part of this structure, recruitment: we looked at what kind of people they are hiring. It's not surprising that they are a research institution first and foremost, so they hire a lot of researchers, computer scientists, and so on. And now it finally shows that they're working on productizing what they're building: they are looking for product people as well. So they're in the process of gradually releasing what they build to the community, and they're at the very first step of that, basically. That may be the biggest reason why the community is not that strong. It also shows that if you're into community building, developer education, or solution engineering for clients, they're going to need it; even if they don't have all these positions open now, they're going to need them for sure. And what does that mean? This is one of my favorite points: if you like coding and you want to quickly understand this field, solution engineering is one of the best fields you can be in, because your day-to-day job is basically talking to clients, solving their problems, seeing how they solve problems, seeing where they're struggling and what can be done, and helping clients create value. It's kind of like the fast lane for learning how to apply AI to real business settings.
And as I said, this $500-per-month wellness stipend is something I hadn't come across before, so big kudos to them for that. They also cover the usual: they give equity, they sponsor green cards if you need it. So they have all the perks others are providing as well.

Mihaly:

And I also found they are hiring recruiters, so it means they're growing fast.

Viktor:

Yeah, yeah. And they'll need operators as well. I didn't really see it explicitly, but I guess they're going to need security expertise too, and, as I said, more people with product expertise: UX, marketing, psychology. What we already discussed with AssemblyAI is that they focus on onboarding people to the API, and obviously Anthropic is an API-first company as well, so they are going to need people dedicated to optimizing the developer flow. And also: dogfooding is my middle name, that's my pet topic. What is dogfooding? Dogfooding is when you walk the path yourself. In this case specifically, it means requiring everyone in the company to build something on top of their APIs, no matter their expertise: not just engineers and researchers, but also managers, customer service people, everyone. Every single person should build something on top of the API, so basically use their own product. What would that unlock? It would be the single most impactful decision they could make, because then every single issue bubbles up. If something is not working, if something is not clear, if there aren't enough tutorials for beginners, it's going to bubble up, continuously. And the needs too: "okay, I tried to use this for sci-fi writing and it's not good for that" is going to turn out, because someone is trying to do it. The research and development is covered already, and finding good product-market fit and providing an easy-to-onboard experience would skyrocket if they just required dogfooding from every single person in the company.
So that would be the biggest, most impactful decision they could make.

Mihaly:

Okay. Thank you very much, Viktor, and thank you very much, Disruptors, for listening to us for this long. You will find every important link and piece of information in the show notes; please go to disruptordigest.com. Viktor, that's a wrap.

Viktor:

Yeah, that's a wrap. Thank you for listening. See you, guys. Thank you.

Mihaly:

Bye.