With the Rise of Agents, We Are Entering the World of Identic AI

ADI IGNATIUS: I am Adi Ignatius.

ALISON BEARD: I’m Alison Beard, and this is the HBR IdeaCast.

ADI IGNATIUS: All right, so we are back again this week to the topic of AI. One of the things that I find most interesting about AI is that despite its huge potential, there’s no blueprint for how to use it. So we’re all essentially pioneers experimenting with this powerful technology to figure out how it can help us.

ALISON BEARD: Yeah, I’m constantly hearing from colleagues, guests on this show, other contacts about how they’re using AI in really new and creative ways, and I think, “Gosh, I should really try that.” But at this point, I’m using it for writing article summaries that we need to do for the magazine and HBR.org, and, honestly, for feedback on which colleges my daughter should apply to. But I know that I could use it for so much more if I just put in the work to better understand it and train it.

ADI IGNATIUS: I think the breakthrough AI opportunity is still slightly ahead of us or slightly ahead of most of us, and that is the widespread introduction of AI companions that we’ll have at our disposal at work and home that are trained by us, that know everything we know, and that can take action on our behalf across a range of activities.

ALISON BEARD: Okay, so now that’s beginning to sound a little bit creepy. I don’t know that I’m willing to give an AI companion that much control over my life.

ADI IGNATIUS: Yeah, look, that’s a fair point, and I think a lot of people share it. I do think it’s coming, and I think it’ll be hard to resist. Our guest today, Don Tapscott, will talk about what he calls identic AI. Tapscott, who’s been spotting future trends in the tech space for decades, is CEO of the Tapscott Group and author of the new book You to the Power of Two: Redefining Human Potential in the Age of Identic AI. Here’s my conversation with Don.

Now, I want to make sure we’re grounded in this conversation. Most of our listeners are probably comfortable using AI in their work and personal lives. They may or may not be experimenting with AI agents to handle certain tasks. Your book, though, projects us forward to a mostly not-yet-available technology that you’re calling identic AI. Talk about what that is.

DON TAPSCOTT: Well, modern generative AI has really gone through three phases. The first was gen AI, where AI could generate content with amazing capability: text, data, graphics, all kinds of stuff. We used it as a tool. Then AI went through a second phase, where it acquired some agency: agents could act, they could manage tasks, they could pursue goals without our prompting them with questions or requests.

There are all kinds of agents: agents in supply chains, call centers, financial systems, and so on. But our view is that the agents that really matter are the personal agents, the rise of intelligent companions that really learn who we are, that reflect our values, and that ultimately operate as extensions of ourselves. So the shift is that AI is no longer just an extraordinary technology; it’s becoming part of the human experience. And we call these personal agents, a subset of agentic AI, identic AI.

ADI IGNATIUS: There are certain concepts that are sort of just out there beyond the horizon. AGI, artificial general intelligence, where machines truly have human cognitive capabilities, is one. I guess the singularity is out there eventually. And now there’s identic AI. One question for our listeners who are trying to grapple with all this: Why do business leaders need to understand what you’re calling identic AI right now?

DON TAPSCOTT: Well, if identic AI is going to change everything about the human experience, it’s going to have huge implications for the enterprise and for management. One way of thinking about it, and this is how we started the book, with thanks to the Beatles: “You wake up, you get out of bed and you drag a comb across your head.” And before you’ve engaged with the day, before you’ve gone downstairs to drink a cup, as the Beatles say, your agent’s already figured out what’s going on. It’s summarized your health data, checked your schedule, flagged the traffic delay, curated and summarized the news, and picked some articles that you actually care about. It’s reminded you of your sister’s birthday. It’s outlined several conferences that it will attend on your behalf. And by the time you reach the office, your agent is not just an assistant updating your agenda and so on. It’s drafted replies to emails, it’s optimized the production schedule, it’s tackled some customer complaints.

And as Peter Diamandis, the founder of XPRIZE, said to us in an interview, he’s got Peter bot, and it’s like having an infinite number of vice presidents. And so this thing is going to act as your consigliere; your doctor that’s been to every medical school in the world; your tutor that’s literally a know-it-all; your planner; your counselor. But more than that, it’s going to learn your values, it’s going to anticipate your needs, and it’s really going to propel your capabilities forward.

And this is not sci-fi. This is happening right now. All the main technology companies have geared up to deliver this technology right now. And I’ll be giving a speech in a week where it’s going to be me on one screen and digital Don on the other screen, and he’s going to help me answer questions. I’m pretty good at answering questions, but I don’t know what I wrote on page 232 of The Digital Economy in 1994. Maybe somebody wants to know about that.

It means for the manager that you have a superpower now, and this will change literally everything about the deep structure and architecture of the firm, about the way that people operate. There’s a huge shift from execution now to strategy because this technology does the execution, HR and everything that we know about that is about to change. So, buckle up.

ADI IGNATIUS: I want to get to some of these things later, but I want to go step-by-step here. As you said, this already exists. A lot of people have interacted with Reid Hoffman’s Reid AI, which is, I don’t know if the term is a digital clone; I think that’s more so that we can all experience it rather than necessarily a tool for the future. But there are people, as you mentioned, who are creating digital twins, or identic AI, whatever you call it. When you listed all the things this can do, it probably sounded sci-fi to most people. So I want to get a sense: What exists right now? To what extent is everything you described already available and in use?

DON TAPSCOTT: Everything I described is available, and it’s in use. You can look to any one of the platforms, and they’re rolling out their capability at different levels. Claude right now is one of the leaders – “Claude,” excuse my French – and it’s got a capability to build your own agent. And the important thing here is that rather than using a bunch of tools, these are all coming together to have persistent memory, to get to know you, and to reflect your goals. And as you build them up, they become an extension of you.

For digital Don, for example, I’ve input about 500 documents, everything I could find that I’ve written: my speeches, my PowerPoints, my books and articles and interviews, all kinds of stuff like that. And it’s learning about me, how I view things, and how I think about things. And I don’t know if I’d describe it as a superpower yet, but wow, I’m a lot more capable than I was six months ago.

ADI IGNATIUS: Okay, so what does this look like in, say, five years for a white-collar executive? What is the human doing? What is the identic AI doing in five years, your best guess?

DON TAPSCOTT: I think that we will spend a lot less time on execution-related activities. You remember, I think you were editor of HBR when Larry Bossidy and Ram Charan famously argued that execution is strategy. And in an era of identic AI, that equation really breaks, because AI agents can handle coordination, analysis, scheduling, workflow, all the other stuff about execution, and they can do it at machine speed. Execution increasingly becomes commoditized. And so as a manager and executive, what differentiates a firm is no longer your ability to execute but your ability to think big picture, to choose the right goals, to define purpose, to make high-quality strategic judgments, and so on. Now, this has been going on for some time, but it’s being supercharged now with identic AI. Management shifts from supervising work to supervising direction.

ADI IGNATIUS: For this to have maximum value, it sounds like these identic AIs will have far more power to make decisions, and complicated decisions, which really multiplies the number of Dons or Adis out there interacting with people and making decisions. And that sounds great in terms of efficiency. It also, at least at this stage, seems scary. What if it makes a decision that actually is a misinterpretation of what I had wanted or what my company wants? Are we going to get to a point where we’re truly going to let AI make complicated decisions valued at millions of dollars that affect us and our companies? And do we need to worry about checking, or is it going to be fine?

DON TAPSCOTT: Well, no, it’s not going to be fine. It’s really delegation. It’s a good question. We’ve always delegated certain things to other people including making decisions. We have signing authorities and so on, but we have checks and balances and ways of ensuring that good decisions get made. And the same is true for an agent. Only you’re not delegating to someone who’s newer in the workforce or who’s a subordinate or something like that. You’re potentially delegating to an infinite number of vice presidents who have an IQ of 1000.

ADI IGNATIUS: What could go wrong?

DON TAPSCOTT: Well, I don’t know. I think if someone’s got an IQ of 1000 as opposed to 115, chances are they’re going to make some good decisions as long as you train them and equip them. And as long as there are checks and balances.

ADI IGNATIUS: Well, let’s talk about training. What does training mean then? I think anybody listening to this is going to think, “Okay, I see the potential, I see the risk, I see the ethical considerations.” So what does training mean? How do we make sure the parameters are such that this is doing what we want and will not do what we do not want?

DON TAPSCOTT: You’re not just training an individual, you’re training your agent. And it’s the same with hiring: you’re not just hiring an individual, you’re hiring someone with a capability. So what are you going to look at, their résumé, their experience? Well, maybe someone with a highly developed agent who’s new in the workforce can perform better than a seasoned executive. So the management of our agents, not just their training but their review cycles, their accountability systems, their overall direction, their shaping, their equipping with the values that you and your organization care about: these are going to become new, powerful, critical, central elements of management that just don’t exist today.

ADI IGNATIUS: I find more and more companies are hiring for critical thinking, for agility, for adaptability. The same is true in education. But within companies, how do you make sure you maintain your own agency, for one, but also, how do you keep your skills sharp as you hand off more and more decision-making to AI? Is this going to make leaders lazier, or somehow keep them sharp? And this may be a question of faith rather than evidence at this point.

DON TAPSCOTT: Let’s step back a sec. During the period of the internet, we outsourced storage of information to the web. You didn’t have to remember the name of that street because you could look it up on GPS or that date. Well now we’re not just outsourcing storage, we’re outsourcing thinking. Am I going to think about this or am I going to let my agent think about it? So does that make us lazy thinkers or does it radically enhance our capacity to think much more deeply as we interact with this powerful thinker to come up with something that’s even better?

And you used the word skills. I’m not sure that’s a good word now, and this also gets into a whole chapter that we wrote. What do we do with the education system? What does lifelong learning look like? I remember my dad graduated, he had a career, he was set for life. Well, now, young people coming into the workforce today are set for 15 minutes. And you’ve got this superpower that’s going to help you learn throughout your life. So the purpose of education and of learning is not really the development of skills in my mind, it’s the development of these underlying capabilities like critical thinking, having good BS detectors, because Lord knows there’s so much BS and that’s about to increase exponentially. What is true and what is not?

The ability to collaborate, to see the big picture, to see the interrelationship between things, your passion for not getting lazy and for doing research and learning lifelong. These are the kinds of capabilities that we need to develop in order to be able to manage having a superpower.

ADI IGNATIUS: So it changes in many ways the concept of management or certainly middle management, where there are people who have had fantastic careers executing, as you say, which is less and less important and executing efficiently. That’s what they do. And if you tell them, “No, no, the bot can do this and you should think deeper,” I’m sure a lot of people are like, “Okay, what am I supposed to be deeply thinking about?” So I feel like this changes management. It probably changes the very nature of the corporation, how we’re structured, how we manage one another. This is a big deal, and as you hinted, we have to adapt the educational system finally to keep pace with all these changes.

DON TAPSCOTT: Well, let’s take those two. Management and middle management. And what does it mean to the architecture of the firm? You remember Drucker?

ADI IGNATIUS: I do indeed.

DON TAPSCOTT: The founder of management science referred to middle management as “the boosters of the faint signals that pass for communication in the pre-information organization.” That notion was really built into the concept of hierarchy: hierarchy existed to collect, to amplify, and to relay all these signals. Well, identic AI eliminates those faint signals. Information becomes direct, continuous, contextual. And when individuals and teams have agents that surface relevant insights in real time, the information rationale for layers of management disappears. I don’t think it means that management disappears altogether, although a lot of management jobs will, and already are. It means that the role changes. It’s, again, less about supervision and coordination and more about judgment, about governance, about accountability, but also about harnessing the superpower to contribute to value creation in the firm.

And so this is a whole new area of management thinking. Think about management science: we’ve got these publications, we’ve got, I don’t know, hundreds, probably thousands of business schools and millions of books, and this whole discipline that we’ve developed is just based on the concept that we have these human beings who work within a firm that has boundaries and structures and so on. Well, I think all of that is about to be turned on its head. Middle management didn’t fail; it’s just that its job has kind of disappeared. So it’s got to do some rethinking.

ADI IGNATIUS: I mean, the implication I would think is that many, many jobs are going to be eliminated. You may decide, Don Tapscott is my best employee, and he is creating a digital twin or identic AI, and he will manage that, and it will be amazing. We’ll get him to some exponential power. But a lot of the people who would’ve executed on that are redundant, are no longer necessary. I can’t really imagine an alternative to what I just described.

DON TAPSCOTT: I don’t know if you remember, but in ’94, The Digital Economy was, I guess, the first big book about the web. I was wondering, is there an economist who can help us understand all this? And I came across the work of Ronald Coase, who wrote a paper, I don’t know, 80 years ago, and he asked a deceptively simple question. He said, “Why does a firm exist? If Adam Smith is right, and the market is the best mechanism for organizing people and resources and money and information, why isn’t everybody an independent contractor at every step along the way in production?”

And he said, and he won a Nobel Prize for saying this, “The answer is transaction costs.” He defined these broadly: the cost of search, finding all the right people to do something; the cost of coordination, imagine getting all these people together to create a microphone when they’d never met; and the overall cost of building trust. Basically, transaction costs.

And so he said, “Well, we bring these inside the boundaries of a firm because the costs are lower there.” Well, then we had these different waves of technology. First came IT, and the boundaries of the corporation became more porous, and then there was the rise of the internet.

But the core structure of the firm has remained somewhat stable to this day. Well, now you’ve got AI coming in. Think about the implications of AI and related technologies like blockchain on search: finding the right people, the right information, the right designs, and so on. Or the cost of coordination: we have these agents that are now working together to do things. So I think that these transaction costs are being devastated in an open market. And so we’re going to see radical new models of the firm emerge.

Now, blockchain brought about some of these, like the DAO, the decentralized autonomous organization. These are basically, in many cases, organizations without a traditional management structure, and they use tokens as mechanisms for incentive rather than traditional command and control. But you add in AI, and the reason that becomes important is that these DAOs function on something called a smart contract, which is basically a contract, an agreement between people, made in software. Well, now think of what AI does to the smart part of that smart contract.

I think we’re entering a period where we’re finally going to start to see radical new models of how we orchestrate capability in society to innovate, to create goods and services, and to create value.

ADI IGNATIUS: One of the many unresolved questions for this future is who owns the AI. If I have an identic AI working for me at Harvard Business Review, what happens when I leave to go somewhere else? Can I take my identic AI with me? This reinvents the very concept of the person. What are the rights and responsibilities that we have, and what do we need to work out as these become commonplace? I mean, I know this is not resolved yet, but is it clear to you what needs to happen?

DON TAPSCOTT: The biggest question for me is, who’s going to own digital Adi? Mark Zuckerberg? Google? This is an extension of you and your intelligence, and if they own it, that’s a big problem. Just think about something like product placement. You’re watching a movie right now and it’s got a Coke or something in it, and there’s a subliminal impact. Well, imagine if you can place that in your extended intelligence. Or forget about a product: how about placing an idea or a political point of view? We argue strongly that, no, identic AI needs to be self-sovereign. We need to own our own superintelligence.

And so how does that work for a corporation? When someone leaves, what do they take with them? There’s an AI engineer named Harper Carroll, the chief engineer at a company that was sold to OpenAI. She had some very useful framing: when an employee leaves a company today, they lose access to internal systems, but they don’t lose the knowledge and judgment they’ve developed. And the same principle could apply to identic agents. The agent retains the patterns and skills and the knowledge about the individual, but it loses the proprietary data that’s unique to the company. And that distinction between personal cognitive development and institutional knowledge, which exists today, is where future management frameworks need to land.

ADI IGNATIUS: So how do we cope with some of the more fearsome elements? It’s not hard to get to a very dark place where AI is overruling mere human notions of what the technology should do. And I don’t think it’s fanciful to imagine the dark side, the really bad consequences, because we have not anticipated them or we’ve not created guardrails. What do we do about that? I feel like we’re moving so quickly, and there’s such resistance to having a conversation before we hurtle forward, that we might end up in a bad place. In this context, how do you think about the risks?

DON TAPSCOTT: Well, my view is the future is not something to be predicted; it’s something to be achieved. And we’re building a movement, really, with this book, for self-sovereign identic AI. And as an aside, that means a modest little undertaking: reinventing the AI stack. We think that’s feasible technically, and it’s achievable socially and economically. But I’ll tell you, I was sitting here one night just musing about stuff, and I had some music going on in the background, and this song came on, What If God Were One of Us? Remember that?

It was Joan Osborne. And I started thinking about living in a world where we are not the apex intelligence. So if these agents become so powerful that they become an apex intelligence, then what does that mean?

And if these agents align with us, if they reflect our values, our best selves, then we can evolve into capabilities almost that are godlike. We will see further, we’ll act faster. We’ll remember more accurately. But if they become independent, as we can see a little hint of that in these social networks that are appearing, if they’re capable of setting their own goals, they may cease to represent us and begin to transcend us.

And so the point is that, and I’m reinforcing what you’re saying, that rather than something that becomes the divine, it becomes something that could truly be demonic. How are we going to deal with that? And this is not just the issue of superintelligence that everyone worries about. It’s a new issue that we’ve not come to grips with who’s going to control that capability? And I want to control digital Don, I want to control the extension of me. I want to own it, and I don’t want some platform using it for their purposes rather than mine. So we each need to become aware of what’s at stake here. And the question of sovereignty becomes the central issue of our time.

ADI IGNATIUS: How does leadership need to evolve? We’re still essentially managing people. Yes, there are agents here and there, but I don’t think very many people have agents as bosses yet. We’re going to go from the standard paradigm to leadership as orchestrating these human and AI systems, probably having AI as your boss, AI as your charge. How do we even think about evolving leadership into this unusual new world?

DON TAPSCOTT: Well, I think leadership is the right term. And again, to go back to Drucker, he talked about in times of stability, you need good management. And in times of change, you need good leadership. And the characteristics of a good manager and a good leader, as we know, are very different. But I don’t think you’re going to be managed or led or report to agents. You will report to people who have a superpower. And you will have people reporting to you who also have an extraordinary capability as well. We still have individuals working with and for individuals, it’s just that their capability is being enhanced, well, almost infinitely in theory at least.

ADI IGNATIUS: So if somebody’s listening to this and thinks, “You know what, I don’t know if I like it or not, but it feels like this is inevitable. It’s the next step toward this ever more capable AI that can create efficiencies, that can help in our work.” How do they get from here to there? How do they start to experiment with, let’s say, these identic agents, these very skilled agents, to try to make them part of the workforce, part of what their company can do?

DON TAPSCOTT: Yeah. Well, the first thing is to get knowledgeable. You know, it’s happened to me a few times in my life where I thought, “Wow, this thing is huge and nobody knows anything about it.” And I got lucky enough to write the first big book about it. Well, nobody really knows much about identic AI. We still have this view of AI as tools or AI as agents that are out there doing things, but you need to get knowledgeable. I would spend every waking moment learning about this, trying to understand it.

And then secondly, and I’ve always said this over the years, you need to use it yourself. I said this back in the ’80s, when they said every manager is going to use a computer: you should try to use it with your own fingers, not your secretary’s fingers. Same with the web, same with social media. So develop an agent, not as a demo, but in real work. You need to know what it feels like to delegate cognition to an agent, and in doing so, you can learn to govern it responsibly for others.

The third thing is that you need to map where augmentation will be uneven: which roles are going to be amplified, which will be compressed, and which will shift toward supervising agents and managing expectations. And we need to start to think about the redesign of management and work so that people aren’t competing against their own tools, if you like. I would start to rethink, as a manager, every different part of the organization, starting with HR: recruiting, training, compensation, performance evaluation, and so on. Just go through all of those; we talk in the book about how these will change radically. And if I were a manager listening to this: the fastest way to lose relevance and lose control to identic AI, and a lot of people will, is to ignore it. It’s time to get up to speed.

ADI IGNATIUS: Don, this is all fascinating. Thank you very much for being our guest on the HBR IdeaCast.

DON TAPSCOTT: Well, it’s great to chat with you again.

ADI IGNATIUS: That was Don Tapscott, author of the book You to the Power of Two: Redefining Human Potential in the Age of Identic AI. Next week, Alison learns more about the power of positive intent and what that means for businesses. If you found this episode helpful, share it with a colleague and be sure to subscribe and rate IdeaCast in Apple Podcasts, Spotify, or wherever you listen.

If you want to help leaders move the world forward, please consider subscribing to Harvard Business Review. You’ll get access to the HBR mobile app, the weekly exclusive insider newsletter, and unlimited access to HBR online. Just head to hbr.org/subscribe.

And thanks to our team, senior producer Mary Dooe, audio product manager Ian Fox and senior production specialist Rob Eckhardt. And thanks to you for listening to the HBR IdeaCast. We’ll be back with a new episode on Tuesday. I’m Adi Ignatius.
