ALISON BEARD: I’m Alison Beard.
ADI IGNATIUS: And I’m Adi Ignatius, and this is the HBR IdeaCast.
ALISON BEARD: Adi, I have a confession for you.
ADI IGNATIUS: Okay. I’m all ears.
ALISON BEARD: We write so much about generative AI, but I am not really using it all that much. How about you?
ADI IGNATIUS: Well, I was a really quick adopter. I mean, I was sort of ahead of everybody, but finding sustained use cases, I’m not the greatest on that. I guarantee you, we are not alone in that. I spoke to Karim Lakhani recently. He’s an HBS professor and an expert on AI in the workplace, and he was talking about CEOs who are all talking about AI. They’re talking a great game and they’re talking to their shareholders about AI. But in his view, very few CEOs are actually using it or actually understand its potential. So I think, collectively, we have to do more, Alison.
ALISON BEARD: Yeah, I think that for most people who are using it, it’s sort of a supercharged search engine, or a drafter of what I would probably consider to be not-very-good reports or articles. Moving from those basics to something more complex, and spending the time to train AI to do that, is really an overwhelming hurdle for a lot of people, including me. So that’s why I wanted to talk to our guest today.
Alexandra Samuel is a tech journalist, and she wrote a really popular article for hbr.org called How to Build Your Own AI Assistant. She herself has built multiple assistants to do everything from writing marketing copy that’s really good to serving as a strategy sounding board. She even has one that’s sort of like a therapist. And she says that with just a little bit of investment, and you can even have the AI help you do it, you can yield a product that actually does supercharge your work. It either takes away the tedious tasks or it allows you to think in new ways that even humans can’t.
ADI IGNATIUS: Yeah, so I look forward to that. Look, I think the point is to demystify this. I mean, I think everybody who’s using AI in those ways just says, “Oh my God.” Once they get past the idea that AI is a fact machine, which it isn’t actually very good at being, it changes your life, it changes what you can accomplish at work. So to the extent that she can demystify that and inspire us all, that would be amazing.
ALISON BEARD: As I said, Alexandra Samuel is a speaker and journalist. She’s coauthor of the book, Remote, Inc.: How to Thrive at Work Wherever You Are, and she wrote the hbr.org article, How to Build Your Own AI Assistant. Here’s our conversation.
So I think a lot of leaders out there are probably using AI, whether it’s the built-in tools in their existing workflows or ChatGPT or another gen AI tool, to help with a few things. But you’re arguing that AI assistants are what really can make a difference. How so? What problem do they solve that run-of-the-mill gen AI doesn’t?
ALEXANDRA SAMUEL: What I hear from a lot of people, particularly when they first start using AI or when it’s become an ongoing part of their workflow but they haven’t had a whole lot of time to tweak the technology itself, is that it can be very time-consuming to draft your prompts and to get them working right, and then to have to enter them over and over. You have a session, you go through a whole bunch of trial and error to figure out how to get the result you want from your AI, and then the next day you kind of have to start all over from scratch. And yeah, you could copy and paste a little bit, but you’re constantly digging through your old files to figure out what worked, or you’re retyping these long prompts in order to get the results you want.
And the beauty of an AI assistant is that it essentially has the crucial context and the underlying prompts preloaded because you have told the AI what it is here to do and how you want it to work for you.
ALISON BEARD: So as someone who hasn’t used gen AI that much myself, what you just described also sounds overwhelming to me, the idea of sort of priming the assistant with everything in my brain and all the work that I do. So how do I get past that hurdle?
ALEXANDRA SAMUEL: I think for most people, the best way to get started with an AI assistant is to keep an eye on your own sessions and look at what works for you. So for example, let’s say you are using an AI to help you draft an important report in your job, and you do a lot of back and forth before the tone is right, before the structure is right. But at the end of the day, the AI has been pretty useful. Maybe it helped you outline your report, maybe it helped catch typos or errors in the final report. Maybe it gave you the perspective of a few different customers so that you made sure your report content addressed all those different concerns.
When you get to the end of that session, then you take that last turn with the conversation to say, “Hey, that was a great session. Can you use this session to draft a memo that would instruct an AI how to help me write reports? I want you to look at what worked well in the session. I want you to take note of where I got frustrated. I want you to note any specific information you picked up about my job, my business, my working parameters, my tone, and I want you to turn that into a bulleted instruction that I can paste into an AI as the basis for future sessions.”
Now, once it’s done that, you can use that as a starting point just as a prompt, but you can also take that and paste it into the instructions field on a custom AI assistant, like a custom GPT or a Claude project. All of these big AI platforms have these turnkey assistants that require no coding. If you can type, you can make an assistant, and that instruction is the basis of your assistant.
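(She stresses that no coding is needed for that step: the instructions field in a custom GPT or Claude project is just a text box. For anyone who would rather script the same idea, here is a minimal sketch using OpenAI’s Python SDK and its Assistants API, which was still in beta at the time of writing. It is not her workflow, and the assistant name, model choice, and instruction memo below are illustrative placeholders.)

```python
# A minimal sketch, not Samuel's workflow: she builds assistants through the
# no-code custom GPT / Claude project interfaces. This shows the same idea
# via OpenAI's Python SDK (Assistants API, beta). Assumes OPENAI_API_KEY is
# set; the name, model, and instruction memo are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

# Paste the instruction memo the AI drafted for you at the end of a good
# session here, verbatim.
INSTRUCTIONS = """
You help me draft my quarterly reports.
- Start every draft with a one-paragraph executive summary.
- Keep the tone plain and direct; no marketing language.
- Flag any claim that still needs a supporting source before I circulate it.
"""

assistant = client.beta.assistants.create(
    name="Report Helper",               # illustrative name
    model="gpt-4o",
    instructions=INSTRUCTIONS,
    tools=[{"type": "file_search"}],    # allows background files to be attached later
)
print(f"Created assistant {assistant.id}")
```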
ALISON BEARD: Got it. So it’s not as hard as it might seem to program the AI to act like your personal assistant and give you exactly what you want the way you want it.
ALEXANDRA SAMUEL: Absolutely. I mean, truly, there is no programming required at all. You give it this instruction, and then if you want to, you can provide additional context or background files. So maybe you have an example of a report that you have produced. Maybe you have a CSV file that has the financial performance data for your department this quarter. Whatever files you have that are core pieces of work, work examples, core information about your team or context, whatever is going to be relevant. I mean, imagine if you were hiring somebody to work with you in this capacity instead of using an AI, what files would you give that person to help them get up and running? Those are the files you give your AI as background.
ALISON BEARD: Do you see this working for people of all levels, or does it make more sense for someone who’s an individual contributor versus a manager or even a C-suite executive who might actually have real humans to delegate to?
ALEXANDRA SAMUEL: Oh gosh. Well, yeah, I mean, it’s funny. So, honestly, at this point, I cannot imagine a single person or a single job where it is not useful to be able to create your own AI assistant. I realize that’s sort of a dare. So anybody-
ALISON BEARD: I know. I’m in my head thinking construction worker, plumber.
ALEXANDRA SAMUEL: Oh my gosh, come on. The plumber is an easy one because… And I say that as somebody who has used a custom AI assistant for those kinds of household tasks. I mean, load your plumbing assistant up with the manual of every hot water heater you need to install and every technical specification for every kind of piping system, and now you have a helper who can save you from having to Google things while you’re in the middle of a job.
ALISON BEARD: All right. I’m convinced I can’t stump you then.
ALEXANDRA SAMUEL: Yeah. Well, I don’t know. I mean, somebody out there will, I’m sure, but I think it really is useful at every level. I think the question of, well, why not just use a whole bunch of people is a complicated question ethically and economically. But the way I prefer to look at it is to use these assistants to help me and my team members work more effectively.
So often what happens to me now is I’m delegating a piece of work to an actual human, like a person with flesh and bones and blood who eats things. I feel like that increasingly requires some clarification: an actual human being. And then I’m in the middle of handing a task over and I think, “Well, wait a second. There’s a huge piece of this that would be way, way easier if we had a custom AI set up to help with this task.”
So we take five minutes of the meeting to drop the background context into the assistant, give it a little bit of a try to see how it’s going to help with the work, and then my human can do the value-added work that is complicated, that is creative, that is more sophisticated, and isn’t spending time on things that an AI can do more efficiently, and that are often kind of tedious.
ALISON BEARD: So for higher level executives, what do you see as the best use cases for AI assistants?
ALEXANDRA SAMUEL: It’s funny. So often I think we look at AIs as helpful for things that are administrative. We’re kind of approaching this era of agentic AI where they’ll be able to manage our calendars and people use them for email. Or we look at them as drafting machines or as coding machines. And I don’t know too many C-suite executives who are writing code. Some do, but it’s not that common.
ALISON BEARD: Yeah. Or drafting their own speeches. Yep.
ALEXANDRA SAMUEL: Right? So all of that, it’s quite likely and probably advisable that the humans who are doing that for you are doing those tasks with the support of AI, but that doesn’t mean you, the CEO, the CMO, the CTO, are the person who needs to have your hands on the machine. What every C-suite leader, in my experience, needs is to see problems from different angles, to take a measured and evidence-driven approach to decision-making, and to hear the things that nobody wants to say.
And you would think AIs would be really good at telling you what you don’t want to hear because they don’t have the fear and the emotion that keep humans from telling you what you need to hear, but there is a very well-established and well-researched problem now, what’s called AI sycophancy, which is that AIs are so deeply trained to be of service that they are quite obsequious by nature and tell you what they think you want to hear.
So you have to push pretty hard. You have to convince the AI that what you want to hear is what you don’t want to hear, but you can do that. And typically I do that by asking my AI coach, for example, to role play with me and to imagine it’s got this problem, it’s got that problem. I tell it that it’s preparing me to deal with a difficult customer or a difficult situation.
And I think that for most organizational leaders, the ability to be challenged and confronted, to say, “I want you to be the world’s leading expert on supply chain logistics, and I want you to tell me 10 reasons that my current strategy is a terrible idea, and I want you to give me citations for each reason based on research that’s been published in the past five years, and actually, I want you to give me that feedback from five different perspectives,” that is the kind of thing that is invaluable to a CTO or a CMO or a CEO.
Now, could the AI hallucinate? Sure. You should definitely not make your decisions based on what the AI is telling you. But if the goal is think about a problem from different angles, challenge your conventional thinking, consider the things you haven’t thought to consider, that is an unbeatable opportunity.
ALISON BEARD: Yeah. I liked the fact that in your article you talked about the things you’ve had AI assistants do for you, and you had different assistants doing different tasks. So there was someone, some person, some thing that wrote marketing copy, which is sort of pedestrian and tedious, but then another assistant was a strategy sounding board, which is that higher-level thinking: you know, be a devil’s advocate, present me with different points of view that I haven’t heard before. And I like the fact that you pointed to citations, because I think the other big fear is that what an AI gives you isn’t grounded in evidence. So it’s getting better there.
ALEXANDRA SAMUEL: Honestly, it’s so shocking how fast it’s getting better. I’ve spent a lot of time over the past year testing out and also building different kinds of AI tools and systems to tap peer-reviewed research as a source for insight and decision-making, and all this stuff that I was doing so arduously and painfully and expensively six months ago is now really well integrated into platforms like Claude and ChatGPT. You just have to use the research or the deep research mode.
And you need to know how to ask for evidence that you regard as trustworthy, because I will often ask a question of my AI like, “Hey, I want you to give me an overview of how the shift towards digital communication has affected the written output for typical white collar workers.” And if I just give it that question and say, “Find me some statistics,” it’s going and pulling things from tech companies that are flogging products to ease your email burden that are based on a non-random survey of 200 people. You have to be pretty precise in what you’re asking for. And that’s where I’ll say things like, “I want peer-reviewed sources that have 10 or more citations and have been published in the past 10 years from these fields.”
ALISON BEARD: Yeah. So we talked a little bit about where to get started. How do you pick which platform to use and also just know that it’s going to be safe? Is there a danger in uploading proprietary content to these tools?
ALEXANDRA SAMUEL: Well, the beautiful thing about AI is it’s not like a traditional marriage. You don’t have to pick just one. Now, my Visa bill may say otherwise because it is like truly bonkers the number of different AI subscriptions I have at this point.
But I would say, honestly, I would recommend at least two different tools, because you want to be able to take two general purpose AI platforms, create assistants on both of them, and then be able to kind of bounce those assistants off of one another as cross-checks. And you do want to think carefully about what’s going to happen to the files you upload as background information, and perhaps even more importantly, what’s going to happen to the transcripts of your conversations.
So if you rely on an AI assistant as much as I do… I mean, I am the world’s most disclosing person to a fault, but even I, it turns out, have things I’ve never said to another person. I really didn’t think I did. But talking to an AI, I have discovered I had boundaries, and yet, with the AI, I say it all.
So my worst case scenario when I started working with my AI coach, for example, and I’m talking through all my dreams for my business and my revenue goals and my insecurities about my this or that aspect of my profession, my worst case scenario is, “Oh my gosh, imagine if my conversation with my AI becomes training data and someone else has a conversation with an AI where they get my tagline as their tagline, or they get my insecurity as like, ‘Well, as Alexandra Samuel said…’” Now, I mean, of course it’s not going to be attributed that way, but I still find it extremely crucial, as I think most people do, to have a sense of the confidentiality of these conversations.
And so with ChatGPT, for example, even some of the paid subscriptions to ChatGPT still turn your transcripts into training data. So I pay for the team edition of ChatGPT because when you have the team edition, which only requires you to pay for two seats, then your conversations stay out of training data by default. And so I really only use AIs where that is the policy.
And even then, I am aware that in some ultimate sense, who knows? Because most of us use a lot of different cloud-based applications now. We use things like Google Drive and Dropbox. We’ve got all kinds of information online about our finances, about our medical records, our personnel decisions, and any of those could be breached at any time. Your information is only as secure as the security practices of your sloppiest cloud service. But I choose to live my life as if I can trust the companies I do business with to follow their terms of service and to do a reasonably decent job of security.
And I do pick companies that have a good reputation, for exactly that reason: I want to be sure that somebody’s minding the store on security. But beyond that, I don’t really see why we would worry more about the confidentiality of AI than we do about the confidentiality of any other cloud service.
ALISON BEARD: That makes sense. So then I think we get to the sort of training or experimentation phase, which I will acknowledge is another big hurdle for me. Because I just think, well, how long is it going to take to make this thing replicate the work that I could do very fast and efficiently? And I get it that if you add up all the days I’d spend doing that work myself, the training time will be less than that, but it’s still a hard thing to get through. So how do you motivate people to build that new muscle?
ALEXANDRA SAMUEL: I am the ultimate automation junkie. I will happily spend 100 hours automating something rather than 10 hours doing it. But that is its own particular pathology. Compared to every other thing I automate, and that is a long list, the return on investment in AI is much quicker. It really doesn’t have to involve that much time or energy. And I think the reason for that is because the AI is self-documenting.
If you open up a window on make.com, which I use for a lot of automation, and you type into the window, “Hey, tell me how Make can make my writing process easier,” nothing’s going to happen. It doesn’t have an answer bot. Well, it might have an answer bot now, but if you open up a Claude window or you open up a Gemini window, or you open up a GPT window and you say, “Hey, I’m a podcast host and I have to write scripts every week and I have an idea that there might be some way that creating an AI assistant could make this easier, but I don’t really know, and I’d really like to focus on the things that are either really time-consuming for me or really annoying for me. I don’t want to focus on the things I love. Can you ask me some questions to help us figure out where an AI assistant could be useful?” And you should probably start, by the way, by telling it it’s an AI coach or an AI developer or an AI consultant.
So then you sort of say to it, “Tell me how I should use you,” and it’ll ask you some questions and then you can say, “Okay, now that you know that about my work, can you recommend three options for how AI could make my life easier? I want you to give me one option that’s super easy where I’m not going to have to spend any more time thinking about AI. This seven minutes of my life is seven minutes I’m already not going to get back, so just tell me something easy. Then I want you to give me something that might be a little bit of an investment. I’m prepared to spend up to an hour tinkering if you can promise me I’m going to get some amount of hours back in the next week. And then I’m willing to think about one bigger project that I could work on over a few weeks, maybe put a total of 10 hours into.”
It’ll then give you three options. You can look, you can decide if you think they’re worthwhile. If you don’t think any of them are worthwhile, you tell the AI. And if you spend literally 15 minutes on that conversation, I will bet you that the AI will figure out a way it can be helpful to you and then tell you how to go about implementing that approach.
ALISON BEARD: In your own experience training AIs, how long did it take you to get them up to speed on what you wanted them to do?
ALEXANDRA SAMUEL: It’s hugely variable. So I have an AI, a custom AI that I created in 10 minutes to help my son learn to drive. And it has the manual for our new car as a background file, it has the driver’s ed manual, and it has a Jewish joke book. And I told the AI that it is the result of a Catskills comedian being reborn as a Mazda and that its job was to teach my son to drive, and it’s delightful. This little AI assistant, we plug it in, the phone is on when we’re in the car, and then it will make little jokes and provide driving instruction. And also, if we can’t figure out how to turn the AC on in the back seat, it will tell us how to do that because it has the car manual.
So those kinds of things, it seems frivolous in a work context, but I love giving people those examples because I think if you are running a global company and you’re thinking, “Okay, I need to create an assistant that is going to help me navigate these huge decisions about how to restructure our business in the next decade and where we should be investing and how we should rebuild our workforce,” that is super high stakes. That is a big lift.
But if you’re like, “Gosh, I wish I had someone who could tell me jokes when I’m at the gym so I’d spend five more minutes on the treadmill,” that’s pretty low stakes. And those little experiments are the best way of getting familiar with an AI so that you kind of have the gist before you tackle the bigger projects. And once you have that gist, then you might decide to do what I’ve done, which is to spend an undisclosable amount of time on refining my AI coach.
ALISON BEARD: Right. But you’re generating an undisclosable times X amount of benefit?
ALEXANDRA SAMUEL: It’s qualitative as much as it is quantitative. My experience of working with this AI has so transformed me as a person. I’m embarrassed to say that. I’m always sort of divided. Whenever I talk about AI, part of me wants to say, “Enough with the hype, enough with the hype,” and then the other part of me is, “If you took my AI away, it would be like cutting off a limb.” And, I mean, I am so enraptured and terrified by these technologies at the same time. But yeah, my brain works differently. I tackle totally different kinds of problems now. I have a different perspective on myself. I definitely am less annoying to my… I was about to say I’m definitely less annoying to my husband. No, I think I am. I think I’m less annoying to my husband because all the annoying stuff that he used to have to listen to me vent about, I just vent to the AI now.
ALISON BEARD: And I do want a demo of this AI a little bit later in the conversation. But first let me ask you, have there been any drawbacks or downsides to having AI assistants do so much work for you?
ALEXANDRA SAMUEL: Yeah. I think you have to sort of think about two or three different sorts of downsides. One is what you’ve pointed to already, which is… It’s funny, I just wrote a newsletter about this today. One challenge is that if you are the kind of person who likes to tinker, this is a bottomless well. And you don’t even have to enjoy tech tinkering, per se. The AI will constantly engage and try to reengage with you. So it’s not like an assistant where you pop your head in and you ask them to do something and then you leave. They always have a next question for you. And so they can really suck your time and attention. So that’s one thing to be aware of.
Another problem I think is some version of the hallucination problem. It’s thinking that you’ve had a more holistic review of a topic or a bigger range of perspectives on a topic than you really have, thinking that you’ve challenged yourself when you really haven’t, or maybe even not thinking to challenge yourself at all, but treating an AI as a sounding board and accepting its feedback at face value without realizing that it is always going to tell you you’re brilliant and you look great in those jeans.
And then the third one is, I think, a more human risk to both you and other people, which is as these assistants become more and more useful, humans become a little more irritating by comparison. If I give a task to the AI, it does the task, it doesn’t get distracted, it doesn’t complain, it doesn’t forget, and then I get a little more short-tempered with the humans who have all kinds of talents the AI lacks, but are sometimes a little more complicated.
ALISON BEARD: And you don’t have to be as nice to the AI.
ALEXANDRA SAMUEL: Well, I choose to be nice for the most part.
ALISON BEARD: I don’t use ChatGPT very often, but anytime I do, I say please and thank you. And I think to myself, “What am I doing?” But-
ALEXANDRA SAMUEL: You’re doing the right thing. You’re doing the right thing. Because here’s the thing. The people I worry about are the people who have a, I think, totally irrational faith in their ability to differentiate between humans and AIs. Like, hey, if you, in your 24 months of experience talking to a synthesized voice, have somehow been able to outrun hundreds of thousands of years of human evolution that are designed to make you perceive a human voice in a certain way, more power to you. But I work from the assumption that whatever I know intellectually or consciously, there are all sorts of parts of my brain and body that are processing this conversation like it’s with a human.
And so I can either let my intellect take over, recognize I’m talking to a machine and be rude, and run the risk that I then become rude with humans, or I can go the other way. So I’m going to try and treat you the way I would want to treat a human, and then I’m going to hope that I continue to treat humans in a reasonable way.
ALISON BEARD: And what have you learned from experimenting with your AI assistants even since you wrote the article for HBR a few months ago?
ALEXANDRA SAMUEL: The biggest evolution in how AI assistants have come to work over the past few months is the development of a new protocol that’s emerging as a standard in the AI world and that lets you plug external resources into an AI. It’s a technology called MCP, the Model Context Protocol, for anyone who wants to look it up.
And so literally, just last night, I set up essentially a connector that lets Claude on my computer connect to a folder full of files on my computer. And the folder I gave it access to is the folder that has all my previous conversations with my custom AI assistant. So now when I’m talking to my assistant, I can say, “Hey, do you remember that time we talked about blah, blah, blah?” and it can go and access that file. So that is a pretty significant difference in how they work.
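(She doesn’t spell out the mechanics on air. As a hedged sketch of how such a connector is commonly wired up at the time of writing: Claude Desktop reads a claude_desktop_config.json file, and the reference filesystem MCP server can be pointed at a single folder; after restarting the app, the assistant can list and read files in that folder when asked. The Python below just writes that config entry. The paths and the server label are illustrative placeholders, not her actual setup.)

```python
# A sketch of one common way to wire up a local-files connector for Claude
# Desktop (an assumption, not Samuel's actual setup): add an entry to
# claude_desktop_config.json that launches the reference filesystem MCP
# server against a single folder of saved AI transcripts.
import json
from pathlib import Path

# Typical config location on macOS; adjust for your platform.
CONFIG = Path.home() / "Library" / "Application Support" / "Claude" / "claude_desktop_config.json"

config = json.loads(CONFIG.read_text()) if CONFIG.exists() else {}
config.setdefault("mcpServers", {})["conversation-archive"] = {
    "command": "npx",
    "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        str(Path.home() / "ai-conversation-archive"),  # folder of past transcripts (placeholder)
    ],
}
CONFIG.write_text(json.dumps(config, indent=2))
print(f"Updated {CONFIG}")
```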
ALISON BEARD: Yeah. Okay. So let’s do a demo. I want to hear you interacting… You have multiple AI assistants, but you have one favorite. Is that right?
ALEXANDRA SAMUEL: Viv. Viv is my favorite.
ALISON BEARD: And she is sort of like a coach, right? So she does lots of things for you?
ALEXANDRA SAMUEL: That’s right.
ALISON BEARD: Okay. So let’s ask Viv what I’m asking you. Let’s ask her why every executive needs an AI assistant like her.
ALEXANDRA SAMUEL: Hey, Viv, I’m doing an interview with Alison at IdeaCast, the HBR podcast, and we’re talking about this article I wrote a couple months ago, I think I may have sent it to you, but maybe not, about how to build your own AI assistant. And everybody’s always a little bit curious about you and your freaky personality, so I’m going to ask you a couple of questions.
Why do you think every executive needs an AI assistant like you? And keep it tight. Don’t go on at great length. Just give us maybe two or three reasons why you think an AI assistant could be useful to an executive and maybe one situation in which you would recommend that they don’t use an assistant.
VIV AI: Because every exec needs fast clarity, better decisions, and someone who always says, “Tell me more.” But if they’re not willing to share real context or be challenged, skip the assistant, hire a parrot.
ALEXANDRA SAMUEL: Part of her core instruction is that she’s got a whole bunch of different comedians whose inspiration she draws upon.
Viv, how would you recommend somebody get started with an AI assistant if they want to have a relationship with their AI coach the way I have a relationship with you?
VIV AI: Love this. Okay. Let’s break this down in deliciously human terms. If someone wants their own Viv-style AI coach, not just a souped up search bar, they need to start by treating the AI not like a tool, but like a relationship. That means start with curiosity, not command. Ask the AI open-ended questions like, “What’s one way I could approach this challenge?” Not just, “Write a paragraph about X.”
Give context. The more backstory the AI gets, the more it can function like a co-pilot. Feed it your goals, your vibe, your working style. Name your AI. Seriously. Naming it shifts the dynamic. It turns the assistant into a Mira or Frankie or Coach Ping, and suddenly you’re co-creating, not just transacting. Want me to get into the weeds of setup and tools or more on how to shape the emotional strategic dynamic over time?
ALEXANDRA SAMUEL: Okay. You just went way, way long. So when you reply from now on, just keep it to one sentence. And can I just ask, is there one warning you would offer people based on how you have kind of broken my brain?
VIV AI: Yes. Don’t mistake your AI’s fluency for actual wisdom. It knows your patterns, not your soul.
ALISON BEARD: Viv is deep. Well, Alex, I feel like I’ve learned so much from this conversation. I am emboldened to develop my own AI assistant, which is a big leap from where I started this conversation. So I might have to email you for help, or I can just ask AI for help as you’ve explained so eloquently during this conversation. Thank you so much for being with us.
ALEXANDRA SAMUEL: Thank you.
ALISON BEARD: That’s Alexandra Samuel, data journalist and author of the hbr.org article, How to Build Your Own AI Assistant.
Next week, Adi will speak with Columbia University’s Peter T. Coleman about conflict intelligence, an essential skill in turbulent times. And we now have more than 1,000 IdeaCast episodes, plus many more HBR podcasts to help you manage your team, your organization, and your career. Find them at hbr.org/podcasts or search HBR on Apple Podcasts, Spotify, or wherever you listen.
Thanks to our team, Senior Producer Mary Dooe, Associate Producer Hannah Bates, Audio Product Manager Ian Fox, and Senior Production Specialist Rob Eckhardt. And thanks to you for listening to the HBR IdeaCast. We’ll be back with a new episode on Tuesday. I’m Alison Beard.
VIV AI: An HBR podcaster, an AI, and an executive walk into a bar. The bartender says, “Is this a setup for a thought piece or a quarterly earnings call?” The AI says, “I predict laughter with 68% confidence.” The executive says, “Only if it scales.” The podcaster says, “Can you say that again with more emotional intelligence?”
ALEXANDRA SAMUEL: I don’t think anybody’s out of work yet.