Leverage
AI Show & Tell

Nicole, CEO of Headstart

AI-Native Development, Claude Projects Power User, Six-Figure AI Projects, Scaling with AI Agents

Nicole, CEO of Headstart, an applied AI services company, shares her journey of building a successful AI-native business in just two years. Starting solo and leveraging AI for coding, she quickly scaled to a team of four, charging high six-figure project fees. The interview dives deep into her unique approach to AI implementation, leveraging Claude Projects, and building internal tools for efficiency. Nicole emphasizes the importance of communication, product thinking, and a "hard problems first" mentality as key drivers of success. She also outlines ambitious growth plans, aiming for a tenfold revenue increase and significant team expansion. The discussion reveals how Headstart uses AI not just as a tool, but as a core element of its business strategy, enabling rapid growth and high profitability.

Insights

  • AI-Native Approach: Headstart uses AI to write code for AI implementations, creating a highly efficient and scalable workflow. This allows for faster project delivery and the ability to handle multiple projects concurrently.
  • Proprietary Data Structures: Nicole highlights that structuring data is as crucial as the data itself. A well-defined data structure enables seamless integration and maximizes the value of the data.
  • Focus on Product Thinking: Headstart goes beyond just coding; they act as thought partners, helping clients refine their product vision and architecture for scalability.
  • Prioritize Hard Problems: Tackling challenging projects is seen as a strategic advantage, leading to valuable learning experiences and higher client satisfaction. This approach also commands premium pricing.
  • Internal Tooling for Efficiency: Headstart invests in developing internal tools, like an AI-powered code review agent and a prompt optimizer, to continuously improve efficiency and quality.
  • Communication is Key: Strong communication skills, both written and verbal, are paramount for effective collaboration with clients and leveraging AI coding tools.
  • Network Effect of Knowledge: Headstart maintains a GitHub Wiki to document best practices and code snippets, which are then fed back into the AI, creating a continuous learning loop.
  • Claude Projects for Contextual Coding: Nicole strongly advocates for using Claude Projects over other AI coding platforms due to its explicit context management and superior code output quality.
  • Trusting the AI: Nicole emphasizes the importance of trusting the AI's capabilities, allowing for faster development cycles and greater reliance on generated code.
  • AI for Prompt Optimization: Headstart has developed an AI tool to refine prompts for AI implementations, ensuring higher accuracy and effectiveness.
  • AI-Powered Code Review Agent: An AI agent built using Claude Computer Use assists with code review and automates tasks like creating pull requests. This functions like a junior engineer on the team.
  • Focus on Client NPS: Client satisfaction is a top priority. Headstart prioritizes delivering high-quality work and building strong relationships.
  • AI-Driven Growth Strategy: Headstart's growth strategy relies heavily on leveraging AI for efficiency, enabling them to scale revenue without proportionally increasing headcount.

Transcript

00:00:00 Nicole: We'll deliver like an entire application build, like, from scratch in 4 weeks, like, fully functional. We're not afraid of anything. We try to take on the hardest projects. We believe that access to hard problems is, like, a proprietary business value for us. I'm taking, like, swaths of code from Claude Projects when I'm working on client projects. What I love, love, love about Claude Projects is that you can share them among your team. One engineer told me, he's like, you trust it much, much more than I would ever trust it.

00:00:42 Greg: That's Nicole, founder and CEO of Headstart, an applied AI services business. She started the company right at the launch of ChatGPT, and now she's charging a minimum of 6 figures per project with only 4 employees. In this interview, Nicole shares the homegrown automations that keep their revenue per employee high, how she uses Claude Projects to run her entire business, and the playbook that she's using to scale her business from 3 to 14 employees. Let's jump into it. So, Nicole, you and I first got introduced by a mutual friend, and he says, Greg, I met this person. They just started their own agency, and they're absolutely crushing it.

00:01:21 Greg: Can you tell me more about what you have going on at Headstart and what y'all are doing?

00:01:25 Nicole: Absolutely. So I started the company just over 2 years ago now. I left my job in October of 2022. I actually created the business in December, and in between that, ChatGPT launched to everyone in November 2022. So it was quite lucky timing in all of it. I had actually had early access to the OpenAI models. And so I had been using GPT-3 before it was a thing, and I knew I wanted to start my own business, but I didn't know how to go about it. And so I started consulting, coding, in line with the ChatGPT launch. So I was using GPT-3.5 to write code for me before everyone was doing it. And I remember to this day people being like, oh, GPT-3.5, it's not good at writing code.

00:02:11 Nicole: And I had a consulting project that I was on that was Ruby on Rails. I had never done Ruby before, and so it was writing all my Ruby code. And I'm like, well, it's good at writing Ruby code, so I don't know if it's something about the way that I'm using it or what's different. And then by the time GPT-4 launched in April of 2023, everything had changed. And so, obviously, since then, we've gotten almost weekly updates from Anthropic and OpenAI dropping things that we use to code. And so my business, Headstart, is an AI-native applied AI services firm. And so it's very meta in terms of, like, how we use AI. So we use AI to write code that implements the AI for our clients.

00:02:56 Nicole: So very much just AI. Yeah. Yeah. Yeah.

00:03:00 Greg: Yeah. And so did you start Headstart with AI in mind, or was it, no, we're gonna do coding services, and then AI came around?

00:03:09 Nicole: I didn't. I kind of started the business more as just, I was consulting myself, and what I knew how to do was code. And so it was more around, like, what can I sell in general so that I can just bootstrap a business? And that was really important to me, not necessarily raising capital, but figuring out how to do it on my own. The AI piece came in line with it. I started using the AI to write the code before I started implementing the AI. At that time, every company was like, I want a chatbot. I want a RAG implementation. I was like, I need to learn this, and I was very motivated by the money, incentivized to learn it if I were to get paid to learn it.

00:03:45 Nicole: And so I basically started implementing it for people knowing that they would pay for it and then building the business up that way.

00:03:52 Greg: For sure. So tell me about what what are the bread and butter projects that y'all take on right now? And, like, what are your clients asking you for?

00:03:58 Nicole: Yeah. So it's really interesting. It shifted over time. The first year we were in business, it was a lot of chatbots and RAG implementations. We actually still do a lot of those, but there's a bit of a different flavor. And then the more interesting projects, in my opinion, are when the client has actually tried to implement the AI themselves, and they're not seeing as strong results out of it, or they just want it to do more than it's able to do with the implementation that they have. And so we'll actually come in and either redesign an implementation or do something new and get very, very strong results with the new implementation.

00:04:31 Nicole: So we do a lot of unstructured-data-to-structured-data work, which is obviously an incredibly large bucket: document processing, a lot of web scraping, anything that can be kind of data-mapped, so one structure to another. We think in this world, proprietary data structures are as valuable as the proprietary data itself when you think about, like, how you put things together. We do a lot of that. Just honestly, the technology is so far ahead of everyone using the technology that I think it's gonna be years and years before people are actually, like, caught up to the usage of it, which is

00:05:08 Greg: I get the proprietary data side. But what do you mean by proprietary data structure? And why do you think that is so important?

00:05:15 Nicole: So that's interesting. I think there's so much data out there and there's so much data available. And the advent of all the LLMs, like, all the data has been now processed by the LLMs, and the LLMs can generate data too as much as you need it. And so the structuring is interesting because, like, as you think about software products and as you think about value within it, like, something being structured either relationally or however else you wanna structure has inherent value to it. And so if you put the thought process behind what should that structure be, and then you can access the data from wherever it is, that becomes very, very valuable, if that makes sense.

00:05:53 Greg: Sort of. I would I would love an example of those. Like, what what do you mean by that?

00:05:56 Nicole: Like, if you think about it, every company that has an API, that API is a certain structure, and you think about, like, lately, a lot of software that's built is integration, so you're integrating one thing to another. And that API wraps the proprietary data structure in the first place. If you're then trying to, like, plug into multiple integrations, there's a mapping that you would maybe do that's like, this is a common data structure between these integrations. That becomes valuable too, because then you can kind of, like, pipe things to other things much easier.

00:06:31 Greg: Sure. Sure. Sure.

00:06:31 Nicole: Yeah. And so, like, as you think about a company that's, like, leveraging data to make money in whatever sort of way, and maybe they're taking they're inputting the data from all these third party sources, doing that mapping into their structure is almost as important as the data.

00:06:47 Greg: Sure. Yeah. I mean, it's the normalization process to, like, actually make...

00:06:50 Nicole: It's normalization, yes. But normalization is so much easier with the LLMs.
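To make the data-mapping idea concrete, here's a minimal sketch of the kind of normalization layer Nicole is describing. Everything here is invented for illustration (the vendor names, payload shapes, and field names are assumptions, not anything Headstart has shared): two integrations expose the same underlying record in different shapes, and a small mapping layer normalizes both into one common, "proprietary" structure.

```python
from datetime import datetime
from typing import Any

# A hypothetical common schema for a contact record. The value lives in
# choosing this structure once, then mapping every integration into it.
COMMON_FIELDS = ("full_name", "email", "signed_up_at")

def from_vendor_a(payload: dict[str, Any]) -> dict[str, Any]:
    """Map a (made-up) Vendor A payload into the common structure."""
    return {
        "full_name": f"{payload['firstName']} {payload['lastName']}",
        "email": payload["emailAddress"].lower(),
        "signed_up_at": datetime.fromisoformat(payload["createdAt"]),
    }

def from_vendor_b(payload: dict[str, Any]) -> dict[str, Any]:
    """Map a (made-up) Vendor B payload into the same structure."""
    return {
        "full_name": payload["name"].strip(),
        "email": payload["contact"]["email"].lower(),
        "signed_up_at": datetime.fromtimestamp(payload["created_ts"]),
    }
```

Once every source lands in `COMMON_FIELDS`, downstream code only has to understand one shape, which is the "pipe things to other things much easier" point. The LLM angle is that writing these per-vendor mappers is exactly the kind of mechanical code an LLM generates well.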

00:06:55 Greg: Yeah. For sure. Okay. So you started Headstart 2 years ago. What did the team look like then, and what does it look like now?

00:07:04 Nicole: Yeah. That's a great question. So it was just me for a year and a half. I was able to...

00:07:09 Greg: How did you find clients? Like, how did you get your book?

00:07:12 Nicole: All inbound referrals. So it was all people that were like, I either need the software help or AI help. Part of the reason that the book was so broad was because I had no restrictions on technology because I could do anything with AI. So it wasn't like, oh, I'm just a Python engineer or I'm just a JavaScript engineer, I can only take those projects. I could take anything because I could literally do anything because the AI was coding anyway. So, you know, that helped make it, like, very, very broad, in terms of what projects I could take on, and then I was able to scale just me without hiring anyone because of the AI.

00:07:46 Nicole: Because I wasn't writing the code, the AI is writing all the code, so all of a sudden instead of doing, like, one project, you can do 5 projects at the same time. And Yeah. Yeah. You're like, okay, I don't really need employees if I have AI. Obviously, we've shifted away from that, and we have employees now, but

00:08:03 Greg: Sure.

00:08:03 Nicole: That's still, like, the general way that we want to scale the company is in a very, very AI native way, which kinda goes against the way that you would think about, like, a traditional company of just grow headcount to grow the size.

00:08:18 Greg: Totally. Well, Nicole, I don't think you're giving yourself as much credit as you should be, because there are bajillions of people sitting in their rooms messing around on Cursor, like, trying to build stuff. But you've actually built a company, and you actually have real clients. And so what is your unique advantage? Or, like, how would you assess your skill set on top of just AI coding? Like, what else do you bring to the table?

00:08:38 Nicole: Yeah. That's really interesting. So I don't think that what we're doing with the AI is what's proprietary about the business. Part of the reason I'm happy to, like, hop on the phone and show everyone exactly how we're doing things is I don't think that that is exactly the moat that we have in the business. It's incredibly powerful. Everyone should be doing it. It's incredibly valuable for our clients, and you get a huge return on it. But I don't think it's, like, the most important thing for the business. Ultimately, it's, like, communication, how you get things done. The whole thing's hard work, and I think that's probably the biggest blocker is, like, even coding with AI to get projects done.

00:09:19 Nicole: It's hard work. It's communication. It's figuring out how the product should work. It's getting into the nitty gritty details. Like, the AI does a lot, but, ultimately, so far, it only does the coding, which can reduce your time significantly, but you still have to do everything else.

00:09:35 Greg: Well, so what what's in that everything else bucket that you think is most important?

00:09:39 Nicole: I'd say product is a huge part of it. Right? Like, how should it work and being a thought partner on that. Part of the reason clients like to work with us is because we actually help them figure out how the product should work versus just implementing it exactly how they want it, like, to work. So a lot of clients, you know, if you just know what you want, you can go to any developer and get them to build it for you. But we do more of the thought partnership and, like, well, how should it actually work? What is the best thing to do? And, like, what different things can we build around that? And then the architecting of it for scalability.

00:10:09 Nicole: Like, that's the other piece: with AI doing all the coding, we can focus on the architecture of the solution and making sure it's the right solution that scales, and everything's, like, just well done and in components, with the design system and everything that you possibly need, because the AI is gonna do it all that way anyway.

00:10:29 Greg: For sure.

00:10:30 Nicole: I think that's the other benefit is you actually get all these other pieces. But, ultimately, it's kinda communication and delivering the end result, and a lot of that is working with a client to figure out what they actually want.

00:10:42 Greg: Yeah. Yeah. Yeah. Which is the hard part a lot of the time. So you have 3 people on the team right now. Right?

00:10:47 Nicole: Yes. Three people: a cofounder, an engineer, and a second engineer starting the week after Thanksgiving.

00:10:53 Greg: Nice. Congratulations.

00:10:54 Nicole: Thank you.

00:10:55 Greg: So soon to be 4 people. And so, 2 engineers, or I guess, what do their skill sets look like? What skill set are you covering versus your cofounder versus the 2 engineers?

00:11:04 Nicole: Yeah. That's a really good question, because we're also hiring, and we're looking for a very different skill set than the traditional engineer skill set that you would look for. Between me and my cofounder, who's also my brother, I'm the technical one, he's the nontechnical one, and so we honestly, like, have very complementary skill sets, which is amazing for the business. I do all the technical and a little bit of the sales and client comms. He also does sales, and he does all the operational. So he does all the internal operations. He does a lot of product work too. So product scoping is a huge part of our business, because we charge flat fee, and so we have to get all the scope up front in order to do that.

00:11:41 Nicole: He does that. On the engineering front, we look for product engineers. So we look for product builders. We look communication is such a massive thing. I think communication is always important as an engineer, but lately when we're looking to hire, we're actually looking for written communication skills as well as verbal communication skills because when you're using the AI to build, you actually have to use language incredibly well

00:12:06 Nicole: to do the code that you're looking for, and so we need people that can, like, look at code and understand whether or not it's good, but they actually don't need the skill set to write the good code.

00:12:18 Greg: Sure. Sure. Yeah. That totally makes sense. Alright. So 4 people, how many clients? Or, like, what does the portfolio look like?

00:12:26 Nicole: Yeah. So we typically take on, like, 4 to 5 at once as our kind of typical workload, but that's growing as we grow engineers, and the size and kind of timeline of the clients differs dramatically. So we'll typically do, like, a pilot project with a client that's anywhere from 2 weeks to 2 months, and then the client takes the results of that project. They go and user test it. They use it for whatever they would like to. If it's an internal product, like, their entire team's using it, and they figure out what more they want. In that off time, we're working on other clients, and then they'll come back for another project.

00:13:01 Nicole: So we kind of have clients coming in and out. So it's not contractually recurring revenue, but it's reoccurring revenue, because they're typically having a good experience, and they wanna then build more. Our best client is someone that's like, I wanna build everything. And so you slowly break off chunks and build it all for them.

00:13:20 Greg: Yeah. Yeah.

00:13:21 Nicole: Yeah. Which is really fun, but we're figuring out exactly what a client-per-engineer ratio looks like and how to scale that. Typically, I'm still doing all the client comms, which is really interesting, and we are keeping the engineers more on product work and building out internal tooling to support us handling more work at once.

00:13:40 Greg: Sure. Sure. Sure. How long is the average client engagement that you have?

00:13:46 Nicole: Average, maybe, like, 4 to 6 weeks. They're typically short. We deliver quickly, and so we kinda charge a premium in order to guarantee a fast delivery. And so we work with people that want something done yesterday, essentially, and they know exactly what they want and they're ready.

00:14:05 Greg: That's awesome. That's so nice.

00:14:07 Nicole: To have it built as quickly as possible.

00:14:08 Greg: Okay.

00:14:09 Nicole: We are a bootstrapped business, and it's been really, really incredible to hire out a team of US-based, like, highly paid engineers.

00:14:18 Greg: And you're all local to New York. Right?

00:14:20 Nicole: Yeah, local to New York. It's been really incredible; sometimes I don't even believe it. You're like, I'm running a real business now. Every step reminds me that I'm running a real business, from, honestly, like, creating the LLC all the way to now, when we're looking at a bigger office and more employees. So

00:14:39 Greg: So now I'm thinking in real time: how many people are you looking to hire?

00:14:44 Nicole: So we are looking to hire 5 more for January and then 5 additional for May. We have a lot of client demand. So

00:14:53 Greg: No joke. So you're gonna go from 3 right now, almost 4, all the way up to 14 in 7 months.

00:15:00 Nicole: Yeah. Really, really crazy. We're being very, like, particular about hiring, just in the skill set that we're looking for, because we're doing such AI-native things. It's a little bit different. So we are honestly still figuring out and defining what that exact hiring profile looks like as we get into it. We're very, very interested for May in new grads. So people with computer science backgrounds who are actually not trained in the typical way of coding, so that we can train them in our way of coding.

00:15:33 Greg: Nice. Yeah. That's very cool. Random question, but out of curiosity, how do you change your language when you're talking about AI with clients versus, like, technical folks? Like, what is the client tongue you put on to make it make sense to them?

00:15:46 Nicole: I talk very technical. Sometimes my brother has to, like, pull me out of it, because it gets a little bit too much in the weeds if I'm getting, like, really into it. But we're very technical with our clients. I think, ultimately, everything does have an explanation, and, like, people are curious about it, and we're kind of, like, willing to explain as much as anyone's willing to listen and hear about. We find that our most engaged clients actually really appreciate it, because even if they're not engineers themselves, they're, like, interested in learning about how the technology can work and how you can use it. And in order to really understand that, understanding some of, like, the baseline of it is helpful.

00:16:26 Greg: Totally. And where do you like to play with regards to minimum contract size? Like, what starts the conversation with you?

00:16:33 Nicole: Yeah. So it's

00:16:34 Greg: Hey, so after the interview, Nicole said, hey, maybe we don't actually wanna share the exact number of their minimum contract value, because it changes all the time and there's no reason to set it in stone. But what they did say is that they're charging mid 6 figures on short-term contracts. That's hundreds of thousands of dollars, which is very cool to hear. And they're also planning on 10x-ing their revenue in 2025 as they get more enterprise contracts, which I just think is absolutely amazing. So let's go back to Nicole and hear more about Headstart.

00:17:01 Nicole: We've had to raise it due to demand. When I first started the business, the first flat fee I did, I think, was 10 k. And so we just started raising it according to demand. And what's cool about that too is, like, we get a lot done for that amount. Like, we're getting so much more done for that amount than I think anyone else would.

00:17:20 Greg: What do you mean, so much more done?

00:17:23 Nicole: Like, we'll deliver, like, an entire application build, you know, from scratch in 4 weeks, like, fully functional, completely production-ready, on-prem.

00:17:32 Greg: And so one thing that really stood out to me when we were meeting in New York, you're like, Greg, the more projects we do, the quicker each next project gets, because we learn from the last one and we have templates. And so how does that work? It's almost like institutional knowledge that you're building up that makes the next one quicker. So what does that look like?

00:17:50 Nicole: Yeah. So we have a wiki in GitHub, and we're trying to build that out. I'm sure you've heard this all with LLMs, like, garbage in, garbage out. And so if you're putting, like, shitty stuff into it, you're not gonna get good output. But if you put really good input into it, you're gonna get great output. And so we're doing similar things over and over again. I'm usually using the LLM to generate the first version of what we're doing anyway. So I have it generate a wiki alongside that. As I go through its instructions, I'm actually fixing it. So you're almost, like, kinda fine-tuning it yourself.

00:18:19 Nicole: You think about, like, okay, that wasn't the exact output I wanted, but this is. Save it into our internal wiki, and then the next time that we need to do it, we either use it from our wiki, or better yet, put it into the LLM: this is how we know to do it, and we need to do this thing on top of it. So it's very network-effecty: the more work that we get, the more we can build out the internal knowledge, the more the entire team has access to the internal knowledge, and the more that we can feed that internal knowledge into the LLM and get even better results out.

00:18:50 Greg: Okay. Wait. So this sounds super interesting. I gotta dig into this one. So it sounds like, and I wanna speak super tactically here so that we can get, like, to the crux of it, you have a GitHub wiki, and we're literally talking about a text file that says, here are the conventions and the technologies that we like to use, and then the style and order of them. And as you do more projects, that gets more and more refined, because, like you said, it's almost like fine-tuning, but it's not, because it's more prompting than anything. Then for the next project, you'll copy that wiki and basically give that as context to the LLM.

00:19:20 Greg: Right?

00:19:21 Nicole: Yeah. So it's all in markdown files, essentially, and then we use Claude Projects for everything. So, basically, whatever markdown is relevant to the new project, you just pop that in alongside the code, and then it has that context to go off of.
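Nicole describes picking the relevant wiki pages by hand, but the "pop the relevant markdown in alongside the code" step could be scripted as something like this. A sketch only: the filename-matching heuristic, the `topics` parameter, and the single-blob output are assumptions, not Headstart's actual tooling.

```python
from pathlib import Path

def collect_project_context(wiki_dir: str, topics: list[str]) -> str:
    """Gather the wiki pages relevant to a new project into one text
    blob to paste (or upload) into a Claude Project's knowledge.
    `topics` are substrings matched against markdown filenames."""
    chunks = []
    for md in sorted(Path(wiki_dir).glob("*.md")):
        if any(t.lower() in md.name.lower() for t in topics):
            # Prefix each page with its filename so the model can tell
            # which convention doc a given rule came from.
            chunks.append(f"<!-- {md.name} -->\n{md.read_text()}")
    return "\n\n".join(chunks)
```

This is the whole "network effect" loop in miniature: every project adds or refines a page, and every new project starts with the accumulated pages as context.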

00:19:36 Greg: I wanna keep on going down this, but why Claude Projects over Cursor?

00:19:39 Nicole: Oh, so I love Claude Projects. I love Anthropic and Claude in general. Like, they should pay me; they don't. I think it's, like, the greatest thing in the world. I did a case study with them, and they have a quote from me in the article saying, like, anyone that doesn't use it is dumb, or something like that.

00:19:54 Greg: Nice. Nice.

00:19:54 Nicole: Which is, like, so great. I don't know exactly what Cursor is doing on the back end and stuff. I don't know what their system prompts are. I don't know enough about it to know exactly when they're including the context into the prompts. I know you do the Command-K from the file. We do use Cursor. I have very niche use cases that I use Cursor for that, like, my engineers laugh at me for. Claude Projects, what I like about it is it's so explicitly contained. So you create a new project.

00:20:23 Greg: Sure.

00:20:24 Nicole: You write the description, you put in whatever you want into the content, and so I know, for every prompt that's going into the project, exactly what context is available. And I think part of the problem with Cursor is, like, I actually don't know, depending on where I'm prompting in the tool, what context it's pulling from or if it's the right context. I was talking about this with engineers the other day, because people always say, oh, well, like, if it's a really big code base, how do you fit it in? And we work in a lot of really big code bases. We work with our enterprise clients, and the thing about it is, like, as an engineer, you're not looking through every file in a really big code base when you're going to change code.

00:21:05 Nicole: Like, you couldn't. Like, you never would. And so whatever files you would go look for in the file structure to figure out what you're gonna code next, whether it's a new feature or you're going to edit something, those same files you include in the Claude Project, and then the LLM can do everything.

00:21:22 Greg: So basically, you're doing a lot of copying and pasting too?

00:21:25 Nicole: I do a lot of copying and pasting. Literally, like, my entire workflow is just like copy paste, copy paste, copy paste.

00:21:31 Greg: Yeah. Yeah. Yeah. Yeah. For sure. Okay. So that's on that side. You don't have a sample of those system instructions, do you? Can we check those out?

00:21:40 Nicole: Yeah, we can. I have a test project pulled up that we can kinda, like, look at to see how we're doing it, and I'll talk more about a couple other things with Claude Projects. So this is our projects at Headstart. You can see there's your projects, and then there's all projects. What I love, love, love about Claude Projects is that you can share them amongst your team. So we pay, obviously, for an enterprise account. You get full enterprise security baked in, so you don't really have to worry about, like, the fact that we do upload code into here. I have this AI Interview with Greg project. So what I've done here is I have pre-uploaded an entire code base, a test code base that we're working on.

00:22:18 Greg: And is that the file tree structure in the name itself?

00:22:21 Nicole: Yes. I do this so that the AI understands where all the files are coming from. A friend actually used the AI to write this script for me. It's a bash script that I just run in the root of my repository that says, like, flatten the repository. I can show you. I think I had just done it in here. Yeah. So it's literally, like, a flatten-repo script, and I run it, and then you can see with ls that I have this flattened one. And that just allows us, like, if you go to Claude and you actually upload here, you can see how, if it's not flattened, you can't actually upload everything.

00:23:04 Greg: Yeah. Yeah. Yeah.
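Nicole doesn't share the flatten script itself (hers is a bash script a friend generated with AI), but the idea is simple enough to sketch. Here's a minimal Python equivalent, with the `__` path separator, the output folder name, and the ignore list all being assumptions: it copies every file in the repo into one flat directory, encoding each file's original path into its new name so Claude can still tell where every file lives in the tree, and so the whole set can be uploaded at once.

```python
import shutil
from pathlib import Path

def flatten_repo(root: str, out_dir: str = "flattened") -> list[str]:
    """Copy every file under `root` into a single flat directory,
    encoding each file's original path into its new filename."""
    root_path = Path(root)
    out = root_path / out_dir
    out.mkdir(exist_ok=True)
    written = []
    for path in root_path.rglob("*"):
        if not path.is_file():
            continue
        rel = path.relative_to(root_path)
        # Skip the output folder itself and common noise directories.
        if rel.parts[0] in (out_dir, ".git", "node_modules"):
            continue
        # src/components/AppSidebar.tsx -> src__components__AppSidebar.tsx
        flat_name = "__".join(rel.parts)
        shutil.copy2(path, out / flat_name)
        written.append(flat_name)
    return written
```

The encoded name is what makes the later step in the demo work: when Claude answers "put the new view at chat/settings/page.tsx," it's reading the original paths back out of the flattened filenames.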

00:23:05 Nicole: You go into here to flatten it. And then the other nice thing is, when it's flattened with the file path, you can then ask Claude, like, hey, I need to create a new view, let's say for settings for the user. Can you tell me where I should put the new files and then give me the content of the files? It should look like a standard settings screen. I also typo, like, nonstop to the AI. It doesn't matter at all. It's, like, my favorite.

00:23:45 Greg: It's so nice. It's so nice that you can

00:23:46 Nicole: You don't have to have anything be perfect. And so this is kind of essentially what we do. So we work with clients that have a ton of designs. We work with clients that have no designs. And so if they have no designs, like, it'll literally just, like, come up with everything itself, and you can see here it gives you the file path. So then what I do is I just go back into the repo.

00:24:08 Nicole: You can see in chat where it wanted it: chat, settings, page dot tsx. And so we have ID, and instead, we literally do settings. And maybe, like, if people know how to use Cursor to better create your files, I would love to do that. I don't know how to do that. So I'm always gonna...

00:24:32 Greg: Well, you can do it. It's just kinda still the wild west, because it takes so many liberties and assumptions and just blasts it all out that you can go deep down the rabbit hole. It's tough to get yourself out of it sometimes.

00:24:41 Nicole: Really? Okay. So, yeah, what I do then is I literally just copy this and I paste it there, and I don't even really read the code all the time. I do when we're reviewing it for clients, but not when I'm just kind of, like, writing it like this. I think it told me to rename it: settings nav. And then you put that in there. And I'll check to make sure, generally, it looks right, like, no TypeScript error or something like that. Okay. So it says create the files. I already did that. And then it says that in the app sidebar, I need to add this. So we gotta go into the app sidebar and see. So I just, like, do this.

00:25:26 Nicole: And so you don't have to write any of the code, but you do have to figure out, let's see, sidebar, user nav. This one, this here, and then you just have to make sure it gets imported. And then you should be good. And let's see if that even worked.

00:25:47 Greg: Here it is. Settings.

00:25:50 Nicole: And then we just have to figure out what Yeah.

00:25:52 Greg: You gotta go figure out, like, put the proper page in there. Yeah.

00:25:55 Nicole: Oh, yeah. Because it's under it's just the routing that it actually got wrong because it said chat dot settings, and then we didn't actually put it in the chat place correctly. So let's

00:26:14 Greg: see. If we do this, it should work, and then we just have to hook it up correctly. Yeah. So this is

00:26:19 Nicole: the settings page it came up with.

00:26:20 Greg: Look at that. It's so wild.

00:26:22 Nicole: Yeah. It's crazy. And then what's great is when you have Figma designs, essentially what I do is you just dump, like, here are the five Figma screens that I need. One by one, do the screens, and then I'll get, like, first screen over, second screen over, third screen over. And then what I typically do is I'll take all of the code that it gave me. So I'll go through the five screens here, and I'll copy paste them back, and I'll be like, here's one screen. I'm just gonna give it these random screens. Here's a second screen. Let's see.

00:26:56 Greg: We can be so sloppy with our code in LLMs. It's wild. And by sloppy, I just mean, like, you're not even giving context. You're just giving the code. Yeah. I just give it like this, and

00:27:04 Nicole: I'll be like, between all of these screens, is there anything I should generalize out into components? Like, something like that.

00:27:14 Greg: Uh-huh.

00:27:15 Nicole: And the screens I gave are kind of random, so

00:27:18 Greg: I don't know if you

00:27:18 Nicole: can just say, but, like, if I gave it the right screens. Part of I'm trying to describe this right. Without a good client code example, which we can't share, it's hard to explain this, but, essentially, like, when you're making new screens, you as an engineer know generally how the components should break out of those screens. So, like, let's just say a front end example of some sort of form flow. Like, very standard, you have next buttons, back buttons, maybe you have a little dot navigation thing. You have your various forms. You have headers. You have descriptions. You kinda generally know how those should break out.

00:27:55 Nicole: You have Figma designs that are attached to that. So you put in all the Figma designs, you get all the code, and then you just tell the LLM, I need this component, this component, this component, this component, and it just pops them out for you in artifacts, which are so nice because you just get the full file copy paste. Like, part of the reason I don't like Cursor is when, like, if I highlight something here and then I'm editing here, it does it in line, but it doesn't always, like, get everything you need, and not everything you need is always, like, in the right place. And then Yeah.
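The follow-up prompt Nicole describes, pasting the generated screens back and asking what to generalize into shared components, could be assembled like this. The `generalize_prompt` helper and the prompt wording are hypothetical, not her exact text:

```python
def generalize_prompt(screens):
    """Build a follow-up prompt: paste each generated screen file back,
    then ask which pieces should become shared components.
    `screens` is a list of code strings, one per screen."""
    parts = [f"Screen {i + 1}:\n{code}" for i, code in enumerate(screens)]
    parts.append(
        "Across all of these screens, what should I generalize out into "
        "shared components (buttons, headers, navigation dots, form fields)? "
        "Return each extracted component as a full file I can copy-paste."
    )
    return "\n\n".join(parts)
```

Asking for full files rather than inline diffs matches her preference for artifact-style output: each answer is a complete unit you can paste straight into the repo.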

00:28:24 Greg: Yeah. Yeah.

00:28:25 Nicole: The one thing I'll say on cursor too and you can see cursor when I scroll over or no?

00:28:30 Greg: I can see your cursor right now. Yeah.

00:28:31 Nicole: Okay. Cool. The code output I find in Cursor, and my engineer finds this too, so it's not just me, but it is just the two of us. I've been asking around to see if anyone else knows. The code output is better from Claude. And I don't know if that is due to the system prompt in Claude, which they do publish. So I don't know if you guys have looked into that at all.

00:28:55 Greg: haven't seen that.

00:28:57 Nicole: They have the Anthropic system prompt. It's in the API docs. Oh god. No. It thinks I'm a robot. These are really interesting. And so I imagine that Cursor is maybe using a different system prompt.

00:29:15 Greg: That's the

00:29:15 Nicole: same thing.

00:29:17 Greg: I'm surprised that Cursor wouldn't be able to increase the performance, like, if it's literally just for code.

00:29:22 Nicole: Yeah. And Claude, I find that the code back from Claude is much better. I also, like, very rarely use ChatGPT to code anymore. Mhmm. But occasionally, if I'm running into something very gnarly, I will go into ChatGPT and get it. I find that Claude artifacts, like, these, like, just being able to copy paste on the right is much easier. What I do end up doing a ton is, and Cursor has this too, is, like, if it's a really long file, like, 500 lines or something, it'll cut it down and be, like, you know, same code as before here.

00:29:55 Greg: Sure. And

00:29:55 Nicole: I just have to say, like, hey. Can you give me the entire file the entire file?

00:29:58 Greg: Yeah. Yeah. Yeah. Yeah.

00:30:00 Nicole: What I also find so the long, long files, basically, like, there's certain things with coding that I think are gonna stand the test of time with AI as it gets better, and there's certain things that I think will go away. The IDE as a concept, I think I'm, like, questionable on. Like, I personally do not like working out of the IDE in an AI driven world. I go back and forth between that. I'm still in it. I get all the TypeScript stuff and, like, everything like that. That's what I use Cursor for. It, like, fixes all my type errors because I'm, like, just a baby about TypeScript and, like, don't understand how it works.

00:30:36 Nicole: Little things like that, but the IDE is a pattern I'm not confident on. Small code files and things broken down into like proper component structures and like file structures being correct, I am actually bullish on. I think that will persist. I'm sure you've seen stuff with people where they're like I think you and I had spoken about this once where it's like, does it need to be an English language anymore? Can it just cut down to characters, like, short enough? Some of those things like that. You can imagine different ways of coding. What I think is interesting is I think English language is incredibly important or any language, language, just spoken language, human language, not

00:31:12 Greg: Sure.

00:31:12 Nicole: Computer language. File structure is incredibly like, structure organization is very, very important. Small components are important, like building blocks, puzzle pieces. So it's interesting.

00:31:25 Greg: Very cool. You know, it's funny you say that because I've heard an opposite not an opposite opinion, but a counter opinion, which is that file structures are just human conventions and that machines don't really care about file conventions. So, yeah, I don't I don't know which one it's gonna be.

00:31:40 Nicole: No. I could not agree more that file structures are just human conventions. LLMs are human conventions. They operate on human language and human convention. And so what's fascinating about them is the better you are at human language and not machine language, I think the better you are at using the LLM. And that's a very different paradigm than is traditional in engineering. Totally. Which is what I think is very, very interesting. It's really interesting because I do think one of the skill sets that I have, which has made me able to do what I'm able to do, is, like, the human kind of communication and writing ability to get what you want out of the LLM.

00:32:22 Nicole: And it's, like, very, very interesting if that can evolve away because it is evolving more towards it for right now, which is fascinating.

00:32:32 Greg: Yeah. That's very cool. And another awesome thing too is you had a case study published by Anthropic on behalf of some of the work you did with one of your clients. What was the story behind that one?

00:32:44 Nicole: Yeah. So that was really cool. We're just top users of Claude and Claude Projects. We love it. I think it's the greatest thing that has ever been developed. If you took it away from me,

00:32:54 Greg: I would really struggle to run

00:32:55 Nicole: my business, which is probably, like, the highest NPS you can get for a product. And so they wanted to hear about how we were using it and potentially share that with more people. And so it was really cool because I think we're at least using Claude Projects in a pretty different way than a lot of their other case studies. And so that was kind of cool to get written up for that usage.

00:33:17 Greg: Yeah. They must have a data analyst looking at Claude Projects usage and being like, dang, Nicole at Headstart. She's freaking crushing it. At least her team is.

00:33:25 Nicole: Yeah. No. It's it was a really exciting feature for us for sure.

00:33:29 Greg: Yeah. That's very cool. The question I wanted to ask: what other pro moves are you doing? Besides the, of course, we're using Claude Projects, but, like, what other pro moves with AI and coding are you doing?

00:33:40 Nicole: I've been trying to explain it to other engineers and figure out what is different about how I'm using it. One engineer told me, he's like, you trust it much, much more than I would ever trust it. And so I'm taking, like, swaths of code from Claude Projects when I'm working on client projects and I'm just, like, copy and pasting over, and everything's broken down in your typical engineer way. Like, we still have PRs, we still review the code, it still goes in in chunks. It's just happening that much quicker because we're really relying on a lot of the generation to write it. So I do think the trust level is there when we use it for everything.

00:34:22 Nicole: So, like, when we write prompts for the AI implementations for our clients and then we're evaluating those prompts, we have our own evals product that we built. We basically, like, take the prompt, take the evals result. So, basically, if it's not passing a certain test case, we take all the reasoning behind that, and we feed it back into Claude to regenerate the prompt. So we have, like, a prompt loop in terms of generating all of that.
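A minimal sketch of the prompt loop Nicole describes: run the evals, collect the reasoning from failing cases, and feed it back to regenerate the prompt. The `run_eval` and `regenerate` callables stand in for Headstart's internal evals product and a call to Claude; every name here is an assumption:

```python
def optimize_prompt(prompt, test_cases, run_eval, regenerate, max_rounds=3):
    """Iteratively improve a prompt until it passes all eval cases.

    run_eval(prompt, case) -> (passed: bool, reasoning: str)
    regenerate(prompt, failures) -> a new prompt (e.g. a Claude call that
        is given the old prompt plus the failing cases and their reasoning)
    """
    for _ in range(max_rounds):
        failures = []
        for case in test_cases:
            passed, reasoning = run_eval(prompt, case)
            if not passed:
                failures.append((case, reasoning))
        if not failures:
            return prompt  # every test case passes; we're done
        # Feed the failing cases and their reasoning back into the model
        # to produce the next candidate prompt.
        prompt = regenerate(prompt, failures)
    return prompt
```

The key design choice matches what she says: the model doesn't just see pass/fail, it sees the *reasoning* behind each failure, which gives the regeneration step something concrete to correct.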

00:34:48 Greg: So you have not only your own eval tool, which you built yourself, but you also have your own, like, Claude, like, prompt optimizer.

00:34:56 Nicole: Yeah. We built that. We do have that. And then we also have, like, an AI computer agent that we built via Claude computer use. So, you know, the new functionality.

00:35:07 Greg: Oh, yeah. Yeah. Yeah. That's cool. You already built something on it.

00:35:09 Nicole: Yeah. Yeah. Our engineer, Tiff, she built it in, like, 2 days, and so we've been using that. It basically it'll create PRs for you for changes to the code. It's really good. It's kinda like a junior engineer, and we've been running that on all our projects, which is really, really cool. So it's been we've been able to build products in parallel with running the services and then leverage those products to be able to then, you know Yeah.

00:35:32 Greg: Work in the services. The computer use, are you doing that on a VM, or is that literally on her, like, laptop that she's

00:35:37 Nicole: We run it on our own laptops, so it's a script that runs right now. We're working on Dockerizing it and putting it on a server, but there's actually a couple of nice things about running it on our laptops. It, like, uses the GitHub CLI, so it'll actually create a branch and do it from your own terminal. So what I end up doing with it right now is I tell it the change. It creates the change. It creates the PR, and then you're already on the branch in your project, like, in Cursor. And so from Cursor, then I test it myself, like, locally. And so I can still do the testing. We don't have the AI agent doing the testing yet because that's obviously, like, a little bit more complex.

00:36:13 Greg: Yeah. Yeah.

00:36:14 Nicole: And then I can make the change either personally or I'll have the AI agent do it. And so we have that kind of tight feedback loop within it. And what's so cool is, like, we're dogfooding our own product. Yeah. So we can immediately, like, just you know, this thing didn't work. Like, it created an empty file and it got caught in a loop or whatever else. But computer use is, like, very powerful.
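The branch-and-PR flow Nicole describes (the agent creates a branch, writes changes, opens a PR through the GitHub CLI, and leaves you on the branch to test locally) might reduce to something like this. The `open_pr` function, the exact `git`/`gh` invocations, and the `apply_edits` hook are assumptions, not Headstart's actual agent:

```python
import subprocess

def open_pr(branch, title, body, apply_edits, runner=subprocess.run):
    """Create a branch, apply model-generated edits, commit, and open a
    PR via the GitHub CLI. `apply_edits` is a callable where the model's
    file changes land; `runner` is injectable so the flow can be tested
    without touching a real repo."""
    runner(["git", "checkout", "-b", branch], check=True)
    apply_edits()  # the model writes or modifies files here
    runner(["git", "add", "-A"], check=True)
    runner(["git", "commit", "-m", title], check=True)
    # `gh pr create` opens a PR from the current branch, so the terminal
    # (and your editor) are left sitting on the branch for local testing.
    runner(["gh", "pr", "create", "--title", title, "--body", body], check=True)
```

Running through the local `gh` CLI rather than a server is what produces the side effect she values: after the agent finishes, you are already checked out on the branch in Cursor, ready to test.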

00:36:39 Greg: Yeah. That's wild. I wanna ask about tool stack. It's always cool to hear which tools people have in case there's one that isn't part of my routine yet. So what's in your AI development stack, and what are you using besides Claude Projects and besides Cursor?

00:36:50 Nicole: Claude Projects, Cursor, ChatGPT, GitHub, obviously, huge fans of GitHub. We build a lot of, like, different products for people. So we use React Native for mobile apps. Gotcha. We build any kind of full stack application. We use Vercel a lot for new projects. I like using Vercel to spin things up. A lot of Node on the back end or Python on the back end is typically what we use. We work within any client's existing code base, so we'll use whatever they're using across the board. But when we're doing net new, typically, it's, like, React or React Native front ends, TypeScript, Node.js, Python back ends, depending on what we're building microservices.

00:37:34 Nicole: We're typically using Python for different databases, stuff like that. Right. Yeah. It's kind of like, we're just paying for Claude, Cursor, OpenAI, and then, like, the usual business stuff, Slack Yeah. Etcetera.

00:37:50 Greg: What? That doesn't sound like a lot. What are your margins for your business?

00:37:55 Nicole: Oh, we just I mean, we pay engineer salaries plus, like, $60 a month per employee for, like, all the AI. Like, nothing, basically. All our clients pay for the for their AI costs and everything. So

00:38:09 Greg: Yeah.

00:38:10 Nicole: And we have a little office in New York. So those are basically the costs for the business. I still think it's insane that through OpenAI and Anthropic, you can get this tech for $20 a month or $30 a month, whatever it is.

00:38:22 Greg: Absolutely wild. And yeah, that's crazy because, like, you're not even paying the API costs for your Claude Projects. Right? You're just paying for the business enterprise plan. What? Sorry.

00:38:29 Nicole: We pay the API cost for our own test projects.

00:38:31 Greg: So that's

00:38:32 Nicole: the but, honestly, it's pretty much driven to 0. Like, we do a lot of quoting for our clients on how much the AI will cost them, and it's so like, it's, like, 0.00002 then multiply it out. Like, it's hard to get numbers up there.
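The quoting arithmetic Nicole alludes to (tiny per-token prices multiplied out over call volume) is simple to sketch. The function name, prices, and volumes below are placeholders, not real Anthropic rates:

```python
def quote_ai_cost(input_tokens, output_tokens,
                  price_in_per_mtok, price_out_per_mtok, monthly_calls):
    """Estimate monthly model spend for a client: per-call token counts
    times per-million-token prices, scaled by expected call volume.
    All rates are illustrative placeholders."""
    per_call = (input_tokens * price_in_per_mtok
                + output_tokens * price_out_per_mtok) / 1_000_000
    return per_call * monthly_calls
```

For example, with hypothetical prices of $3 and $15 per million tokens, a call using 1,000 input and 500 output tokens costs about a penny, so even 10,000 calls a month stays around $100, which is why she finds it "hard to get numbers up there."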

00:38:49 Greg: Yeah. Yeah. Yeah. For sure. For sure. So one thing that's come across in this interview is you have a lot of technical confidence, which is very cool. It's like, any project, just bring it on and we'll go do it. What types of technical projects do you steer away from? Like, which ones are you saying no to that clients bring to you?

00:39:06 Nicole: Like, below the minimum fee, basically. We're not afraid of anything. We try to take on the hardest projects that we can. We believe, and it's written down on, like, our strategy board, that access to hard problems is, like, a proprietary business value for us. So we think that having that access is incredibly powerful, and we definitely don't stray away from it. Sometimes it is scary because we

00:39:30 Greg: have to do really hard projects that we don't really know

00:39:33 Nicole: how to do on the onset, but we are able to figure it out and have that confidence. But, yeah, typically, like, we will do anything that people think is impossible, and we like to do that. One, because we can charge for it, which, you know, as we should if people think it's impossible, but also because we learn from it, and we think it makes our business more powerful.

00:39:54 Greg: That's wild.

00:39:55 Nicole: And the clients are happy because we're doing essentially the impossible for them.

00:39:58 Greg: Yeah. What else is on that whiteboard of values that your company has?

00:40:03 Nicole: Oh, yeah. So we're very values driven. Company values are simplicity, patience, and compassion. I pulled them from the Dao De Jing, the Stephen Mitchell translation. It's one of my favorite books, and so those are kinda, like, the core values. But in terms of, like, access to, like, the hard problems being proprietary, like, that's where we think about, like, our moats. Like, what is more of, like, the power for the business, and it's not how I use Claude, even though how I use Claude is, like, driving the business forward. It's very much kind of, like, you know, doing good work for our clients is number 1.

00:40:35 Nicole: Client NPS is the thing that we care about the most. Our clients understanding the type of work that we can do, and the quality of work we can deliver for them, is number 1, most importantly. Efficiency of the business, so we track revenue per employee. That's really important as we grow the business.

00:40:51 Greg: Which it sounds like that is just insane right now.

00:40:54 Nicole: It's high right now. Yeah. And we have to figure out how to continue to scale it. Basically, because we're a services business and we're also investing in the product, it doesn't scale like every other business. It's more jumpy because as we take time to build product, we have less time for services. So we're figuring out how to, like, even that out a little bit as we grow. But, yeah, we think communication, client NPS, access to hard problems, being able to solve hard problems, being able to do things in a repeatable way, creating network effects within the business, and, like, good data, even, like, good data in terms of, like, how we use the LLMs.

00:41:31 Nicole: Sure. Good code is kind of proprietary. And so, yeah, we think about it a little bit differently than, like, just how we use LLMs as proprietary.

00:41:41 Greg: Sure. What about businesses that are 2 to 3 years ahead of where you are right now or that you wanna be or 2 to 3 steps ahead? What what do those businesses look like?

00:41:51 Nicole: I mean, I think this is just gonna make me sound like a crazy fangirl, but I think Anthropic is the greatest business out there right now. I think what they're doing is incredible, and, like, the product itself, Claude, is really, really powerful. Sure.

00:42:03 Greg: No. I mean, like, where do where do you wanna go, though? Like, where do you wanna take your business, like, in 2 to 3 years?

00:42:08 Nicole: Oh, that's a great question. We're trying to grow in an AI native way. So we want to continue to hire, but not hire to the scale of the client work. We wanna hire and then train up a team that can then do that work exponentially. We think that the progress of AI is inevitable, and we're building into that inevitability as a company. And so whether or not the AI will be able to do this work completely next week or whether that will happen in 2 years. To me, it's inevitable even if it's 10 years. And so whatever that timeline is, we're kind of just making sure that we are the best at using the AI, and we're best at implementing the AI in the business.

00:42:50 Nicole: And if we do those things, I think the business will continue to grow.

00:42:54 Greg: Yeah. Well, what about products or SaaS? Is that in your future?

00:42:58 Nicole: So the way we're thinking about products right now is we have them, internal products that we've built. We are gonna continue to build those, and we want to be the users of our products first and foremost. We're not trying to build products for other people right now. We're trying to build products that make us more efficient across the board. If we can do that and we've productized that in a really powerful way and we feel like maybe selling it, it's an option, but it's not the priority.

00:43:26 Greg: Cool. Beautiful. Nicole, that was fabulous. Thank you very much for joining us today.

00:43:31 Nicole: Thank you, Greg. It's always great chatting with you.
