The Future of Digital Identity: Fighting AI Deepfakes & Identity Fraud


Can you prove you're actually human? In a world of AI deepfakes, synthetic identities, and evolving cybersecurity threats, digital identity is more critical than ever. With AI-generated voices, fake videos, and evolving fraud tactics, the way we authenticate ourselves online is rapidly changing. So, what's the future of digital identity? And how can you protect yourself in this new era? In this episode, hosts Caleb Sima and Ashish Rajan are joined by Adrian Ludwig, CISO at Tools For Humanity (World ID project), former Chief Trust Officer at Atlassian, and ex-Google security lead for Android. Together, they explore:
- Why digital identity is fundamentally broken and needs a major reboot
- The rise of AI-powered identity fraud and how it threatens security
- How World ID is using blockchain and biometrics to verify real humans
- The debate: Should we trust governments, companies, or decentralized systems with our identity?
- The impact of GenAI & deepfakes on authentication and online trust

Questions asked:
00:00 Introduction
03:55 Digital Identity in 2025
14:13 How has AI impacted Identity?
29:33 Trust and Transparency with AI
32:18 Authentication and Identity  
49:53 What can people do today?
52:05 Where can people learn about World Foundation?
53:49 Adoption of new identity protocols


Adrian Ludwig: [00:00:00] I think this is where decentralization and starting to think about how to have protocols that are able to have reliable behavior that isn't controlled by anyone is really useful. So there's a lot of focus to get to a point where it's nice if you trust your government, but you don't have to.

It's nice if you trust your device, but you don't have to, but you can trust the underlying system and the protocol to perform management of your identity and give you control over it.

Caleb Sima: I like how you do your best to avoid saying blockchain in every way possible.

Adrian Ludwig: I have no issue there.

Ashish Rajan: Do you want to be Jason Bourne in the world of AI we're moving into?

Yes, you heard that right. In this episode, I had the CISO of Tools For Humanity, Adrian Ludwig, who has previously been CISO of Atlassian and many other companies as well. And we spoke about the importance of identity as we move towards the world of AI, and how your identity, as well as the way you authenticate yourself, will rely on validating that yes, you are an actual human, considering all the videos that float around of people who [00:01:00] look like Elon Musk, but are not really Elon Musk.

And a lot more that happens with deepfake and everything else.

Why is identity important? What is the current state of where we are today? And if you know someone who's exploring the possibility of what identity would look like in the GenAI world as it acts as an accelerant, definitely recommend this episode to them as well.

As always, if you have been listening to or watching the AI Cybersecurity Podcast episodes on your favorite socials, whether it's audio on Spotify or Apple or video on YouTube or LinkedIn, I would really appreciate it if you can follow or subscribe on your favorite audio or video platform. It really means a lot when you support us by giving us a follow, liking and commenting as well.

Thank you so much for all the support and I hope you enjoyed this episode. I'll talk to you soon.

Caleb Sima: Yeah. Yeah. See, it's Japanese. No, yeah, it's like a unique Asian hat so that I can represent the warriors in a unique way.

Ashish Rajan: oh, cool. On that note, welcome everyone to the AI Cybersecurity Podcast recording where we talk about unique things people bring to the table. Hey man, Adrian, thanks so much for coming in, man. I really [00:02:00] appreciate you guys.

Adrian Ludwig: Yeah. Happy to be here.

Ashish Rajan: Maybe to set the context, could you share a bit about yourself so people get a sense of your professional background and where you are these days, man?

Adrian Ludwig: Sure. So I'm currently at Tools For Humanity working on the World project. I'm sure we'll get into more details about that, so I won't spend any time on it. In prior lives, I was at Google for several years and ran Android security, then ran security for Nest after that acquisition. I spent a bunch of years at Atlassian, was Chief Trust Officer there.

And if you go way back, I worked on security stuff for the U.S. government. I have always been in a fairly technical role in the security space.

Ashish Rajan: Oh, this may be an odd question, but did you have longer hair before?

Adrian Ludwig: I've had shorter hair and then during pandemic, it got pretty long. My first attempt at a haircut broke my scissors.

And so it grew out pretty long. And then it's now, it's stabilized.

Ashish Rajan: I know this is super odd, but I think I saw one of your interviews from when you were the CISO of Atlassian, and you had longer hair, I think. Yeah, for sure. And I'm like, [00:03:00] I'm pretty sure I've seen this guy somewhere before, but there you go.

Sorry. Small world, but I'm pretty sure he had longer hair. This is more for people who watch the video, but

Caleb Sima: you know, the longer, shorter your hair, the, your voice

Adrian Ludwig: changes. So it's going to gently brush across the microphone. Yeah, but I didn't realize it was an ASMR video,

Ashish Rajan: but in a funny way, it is actually relevant for the topic we have as well, because of Tools For Humanity. I won't talk about the company, but I do want to talk about the whole deepfake AI and the digital identity part.

Adrian Ludwig: oh, yeah.

Ashish Rajan: Specifically the digital identity part, which is I imagine a lot of your focus is these days. In the current world where deepfakes and AI, long hair, short hair doesn't really matter anymore these days.

You see every kind of video. Actually, how would you describe how you see digital identity and privacy at the moment for people, and how complex it is, instead of just doing username, password, and my passport ID?

Adrian Ludwig: It's been a mess probably since the very beginning. I think identity systems were always [00:04:00] designed to solve some very specific narrow problem.

And then the internet connected all of those narrow problems. And we ended up with a million different identity systems none of which can be connected to one another. None of which provide overlapping value. That's the world of identity. And then on the flip side, you have the world of people who have built applications and actual humans who are online, want to know who they're interacting with, and that has basically become impossible.

It could be deepfakes. It could just be that you never really needed to be very deep at all in a really simple fake. It's just been, I think the unspoken reality is identity online has never really worked.

Caleb Sima: I'm pretty passionate about this topic myself. In fact, I'm giving a presentation at RSA this year, 2025, on rebooting our nation's identity and why GenAI is a forcing function for that. My belief is very similar, Adrian: the reason why we have, let's say, 350 [00:05:00] plus different identity companies around multi-factor, two-factor, you name it, scanning driver's licenses and passports to fingerprints and face scanning, is because at a fundamental basis, our nation does not do identity well at all, right? We are based off of birth certificates and social security cards, which, by the way, were never meant to be an identity to begin with.

They don't work in the old world and they definitely don't work in the new world and how if we continue to build off of this broken foundation, like we're going to, it's just not going to work. At some point we have to redo this.

Ashish Rajan: I guess from a corporation or like most organizations we have worked for in the past or currently work for identity over there has primarily been about username password and what kind of role I have.

Ashish Rajan: Is that also evolving then? To your point about two factor, there are like 50 different companies to do two factor, passkeys, and the list goes on.

Adrian Ludwig: I [00:06:00] still remember week two when I was at Atlassian. Paul was his name, and he sat me down and he said, let me describe to you how identity works here.

Adrian Ludwig: And we had a product Jira, and it had a set of structures for how people were organized. And we had another product Confluence, and it had a different set of structure for how people are organized. And then underneath that, you had the fabric of how companies are actually organized. So even within one organization we don't have an alignment on what org structure looks like and relationships look like and how they should work.

So yeah, it's, I think absolutely is a problem. And the idea that they could be unified I think that's where I see world ID getting super interesting. That's the work I'm doing right now is what does it look like when you start to make it possible for these things to be connected and what are the risks and how do you manage those risks from a privacy standpoint?

Caleb Sima: What does the world look like when it starts becoming, yeah, I want to hear,

Adrian Ludwig: Yeah, no, I [00:07:00] think you have an identity for the social network that you use. You have an identity for the gaming platform that you use. You have a work identity. You have a government identity.

Your government identity might be split into your taxpayer identity versus whatever services that you use. Independent of that, you have healthcare and healthcare has isolated itself in many ways because it's concerned about privacy. You probably are a student and you have some kind of educational identity.

That's before you even begin to get into the relationships that you have with people that don't have some formal relationship with you. You have a series of identities for those. And the question is, are those all ultimately the same person? Yeah. Do we have to rely on that person to keep all of those things separate and keep track of them and have them be organized and be responsible for that?

I don't think we do. I think we just need to have a way that the technology makes it possible for them to be connected at the individual. And I think that's one of the things that I'm most intrigued about: what happens when you build [00:08:00] a technology that allows for privacy between those various different roles, and can you actually unify them?

Caleb Sima: And what does that look like when you sort of, obviously Tools For Humanity is building a solution. What does that solution look like? What's the user experience going to look like? And how's that going to, let's say, enable an easier or better experience?

Adrian Ludwig: Most of the problem, there's so many problems.

I hate to say most of the problem, but a lot of problems come down to things have not been designed to interact with one another. Every company. And I was at first, I thought you were going to say Caleb, but there are 350 different identity systems in the United States. I didn't realize it was just companies.

Caleb Sima: Yeah. Just companies for enterprise and consumer identity.

Adrian Ludwig: oh, wow. Yeah. There's probably a thousand different identity systems but it's because each one of them is a company and they're trying to be profitable. And then the set of sort of things that have tried to be standards are trying to unify across those.

And [00:09:00] that's complicated too. So yeah, it's a messy space for sure.

Caleb Sima: During some of the research for this talk, I went through and was like, okay, clearly there are other countries that do this much better than we do. At the end of the day, we have a pretty archaic system on a pretty broken infrastructure.

And just in researching this, it's pretty fascinating. I did not realize this, but rated number one for identity and how they do it correctly is Estonia. 99 percent of all of their citizens have a digital identity. 99 percent usage of chip and PIN, blockchain, all of these things are built in across the board in Estonia from birth all the way to death.

And it's because they didn't have any old infrastructure, right? They actually went through and [00:10:00] really revamped the way that they did their citizens' identity. Fraud, crime, all of these things plummeted once they were able to get everything on this system. They have, I think, over 200 different government services that are instantaneously available online for all of their citizens. It's no longer wait in line with paper documents.

You can just use a mobile app. You can just go online, click, and it's done. All of this, because this entire changing of identity enables so much access and so much convenience. And you never would have thought Estonia is the one. Singapore is clearly on that list too.

India actually has revamped a lot of what they're doing. UAE, a whole bunch of these countries are doing way better digital identity for their nation and their citizens than the U.S. We're actually super far behind in terms of what we do.

Adrian Ludwig: [00:11:00] And at a government level, we're almost opposed to being at the forefront.

I think this is one of those where the United States' focus on civil liberties, the focus on privacy, the focus on independence from government almost runs into direct conflict. I was looking at Real ID, which is not an area where I'm an expert, but I was looking at it in the context of trying to understand: does Real ID require that you have a digital signature of the content that's contained on your driver's license?

The answer is no. And if you go read the documentation that's provided in the state of Texas, they're very adamant that they'd never want to participate in a system that does that. And that's part of being Texan is that you don't have to share at a national level that identifying information.

Yeah there's a sort of political slash cultural element that will make it very challenging for the United States to have that level of consistency.

Caleb Sima: And what's most entertaining about this is that somehow we've gotten [00:12:00] it into our heads that our information not being verified means I have privacy. In fact, it creates less privacy for you, because now you have to fill out all your information for everybody that asks for it, on multiple paper forms. Every customer service rep, everybody in the world needs to know all your personal information, where you live, what your age is, all of these things.

When in reality, if you actually had a specific digital identity which the government gave you, and which you trusted the government for, nobody else would need to have access to any of that information. However, we get paranoid: oh, privacy, the government can track me, so therefore I don't want that.

But yeah, when you ask people: do you think you can avoid and evade the government today from tracking you? No. Facebook tracks you. So I'm [00:13:00] not sure why you think you can avoid getting tracked by the government if they wanted to track you.

Ashish Rajan: I guess it's also important because even people who start work in a new place still have to rely on or show their government ID, no criminal records.

To your point, Adrian, there are companies that, when they start a new employee or bring in an executive, all rely on this existing information, which is digitally available in some way, shape, or form. Your university certificate or whatever is a form of identity as well. You rely on the fact that the person has never committed a criminal activity based on what the police gives you, but they have a completely different thing going on with identity as well.

I guess I call it a digital identity. Bringing it back to what Caleb was referring to as well, GenAI being the accelerant for digital identity, I would love to hear from both of you: what's the context for why it is important now in the GenAI world? Obviously, it's the AI Cybersecurity Podcast as well.

So people are like, great guys, but what's the context with AI over here? [00:14:00]

Adrian Ludwig: Most people think of deepfakes. They think of being able to replicate your identity in terms of visual confirmation of who you are, auditory confirmation of who you are, because we as humans rely a lot on that, right? Intentional or not.

It's been 15 or 20 years since we believed that you could trust text because the ability for someone to fake who you are in written form on the internet, like not even a question. And so the only thing we've been able to hang on to is the idea that we could tell who you were based on your voice or based on your appearance, and now we've lost that.

And so I think that's the sort of key revelation that's happened in the last several years: there's nothing left that you can hang on to that allows you to confirm that this person is who you think they are. Even if it's someone that you know really well, you could interrogate them about where they were last weekend, but that's already on their social network.

So their answer to that is not reliable. You could ask them who all their friends are, but that is on their social network. So that's not reliable. And I think the [00:15:00] GenAI video and audio component is what triggered, for the average person, an understanding that we've crossed the Turing test.

We've crossed our ability to differentiate a human from not human. But for me, it's the realization to Caleb's point earlier that everything we do online creates information that can be consumed by a system that can be used to replicate you. And in many ways, our aversion to having robust identity. And our aversion to having positive control around those factors that are part of our identity has resulted in us just leaking that information out into the world very actively across all of our different interactions.

You give your phone number to the grocery store in order to be part of a loyalty program. You give your home address to a bartender in order to prove that you're of age to be able to drink. You don't know that you're doing that. You don't think about the fact that you're doing that, but you've done it all over the place.

And so building an identity system that allows you to protect that becomes absolutely critical when all of [00:16:00] that information can be turned against you in a very precise, very effective way using GenAI.

Caleb Sima: And, I've always thought about it as like when you come to a court case where video or photographic evidence is showing a crime being committed.

And you have to have a court case to figure out whether this person is guilty or not. Is that video or photo faked or not? And then the question becomes: today it's easy, there are telltale signs, there are ways you can identify GenAI fakes. But there are two problems with this.

Problem number one, which is the most obvious, is that the ability for GenAI to create videos that look authentic and real, replicating footage to look like it's not GenAI, without any telltale signatures, is absolutely feasible within the next couple of years. Problem number two, everything that we do will be GenAI enhanced anyway, right?

Every video we record, every iPhone that [00:17:00] records, will have automatic GenAI enhancements and modifications to it. So therefore, how do you tell what's originally real versus what's not real when there are GenAI traces in all the videos and photos anyway, by default? So then it comes to this question in this court: how do we prove in this video that you did or did not commit a crime?

If the defense claims that it's fake, how do you prove that it's fake or real? And this now comes to this place that says you can't just do blacklisting. You can't just detect what is bad. The only way to do this is to prove what is true. And the only way to do that is to somehow say this video came unaltered, or the alterations that did happen to this video were tracked to actual iPhone firmware and hardware from a phone that was signed by Apple.

And any modifications to that file have an authentic digital trail that shows that this is an authentic video. And in [00:18:00] order to prove authenticity, you must prove identity, because then Apple has to have an identity that has to be trusted and traced to this piece of material to say: this is authentic video. When these things start happening, when law and legal come into play, that's when forcing functions happen that start saying, hey, we can't do this, right? Apple needs to start signing their videos. Android, Google, whoever, these have to start saying, we need to know what is truly coming from a real camera versus something that has just been created.

And I think this sort of nuance of proving authenticity means you have to prove identity. And if companies have to do it, then people will have to do it.
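The chain-of-custody idea described above can be sketched in a few lines. This is a toy illustration, not Apple's firmware signing or the C2PA format: each provenance record commits to the hash of the previous record, so an undeclared edit or a spliced-out record breaks the chain. Real provenance systems additionally sign each record with an asymmetric device or tool key; here a plain hash chain stands in for that, and all device and tool names are hypothetical.

```python
import hashlib
import json


def record_hash(record: dict) -> str:
    """Stable hash of a provenance record (sorted keys for determinism)."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()


def new_capture(media_bytes: bytes, device: str) -> list[dict]:
    """Start a provenance chain at capture time."""
    return [{"action": "capture", "device": device,
             "media_sha256": hashlib.sha256(media_bytes).hexdigest(),
             "prev": None}]


def append_edit(chain: list[dict], action: str,
                media_bytes: bytes, tool: str) -> list[dict]:
    """Record a declared edit; each record embeds the previous record's hash."""
    return chain + [{"action": action, "tool": tool,
                     "media_sha256": hashlib.sha256(media_bytes).hexdigest(),
                     "prev": record_hash(chain[-1])}]


def verify(chain: list[dict], media_bytes: bytes) -> bool:
    """Check the chain is unbroken and that it ends at the presented media."""
    for prev, cur in zip(chain, chain[1:]):
        if cur["prev"] != record_hash(prev):
            return False
    return chain[-1]["media_sha256"] == hashlib.sha256(media_bytes).hexdigest()
```

Tampering with any record, or presenting media that doesn't match the final record, makes verification fail, which is the property the courtroom argument depends on.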

Adrian Ludwig: I assume, just based on what you're describing, you're familiar with some of the work that Adobe and Apple and Google and all of them have done on C2PA, trying to have that identity and authenticity associated with those pieces of content, which is super interesting.

I think there's a second identity there that [00:19:00] is important to recognize, which is the person that's involved in that video capture, if there is one. Certainly having an identity associated with the software that's running on a piece of hardware that's doing image capture is critical. But where Adobe got interesting was: that workflow for that piece of video is probably going to go into their tools. There's probably going to be modification of it, because that's just normal workflow. It constantly happens. But knowing what that modification is and being able to trace those changes is critically important as well.

So I think you need both, right? You need to know what's the software stack that's involved, and then you need to know what are the humans that are involved, because either of them can cause change. And you need both of those identities. For me, it's taken us sort of 30 years to get to the point where most of the software that we're using has some identity associated with it.

Most of the hardware has some identity. There are concerns about where to expose it and how to expose it, but none of the humans have identity. Yeah. And I think we need [00:20:00] all of those things in order to get to a point where we have authenticity in the future. That's very interesting.

Caleb Sima: Yeah. To me, it feels like companies are, to your point, Adrian, the easiest step to make this happen, because once companies start doing this, then we know that the model can be achievable and work.

How does this then transfer to individuals, and where do they get into this chain? But either way, we know this is inevitable, right? There is no way that we're going to continue down this road for that much longer. At some point, no matter what Texas says, I feel like there's going to be a change in the way we do things.

And how those things are done.

Adrian Ludwig: So then the question is: is it companies? And that's the other thing that I think is a little worrying, right? Companies are motivated purely by profit. Governments are motivated by other interests. And I think this is where decentralization, and starting to [00:21:00] think about how to have protocols that are able to have reliable behavior that isn't controlled by anyone, is really useful. So there's a lot of focus in our effort on World to get to a point where it's nice if you trust your government, but you don't have to. It's nice if you trust your device, but you don't have to. But you can trust the underlying system and the protocol to perform management of your identity and give you control over it.

Caleb Sima: I like how you do your best to avoid saying blockchain in every way possible.

Adrian Ludwig: I have no issue there. Cryptography is also fine. You just have to add the -aphy at the end.

Ashish Rajan: Both of them are valid points. As you guys were talking about this, the first thing that came to mind: the entire identity system we have today is based on the whole trust and transparency thing, right? Trust, yeah. Even though many people may not trust the government, they still trust the fact that my birth certificate is real.

And yes, I was born on whatever date and month and time and place and location. And [00:22:00] yes, my driving license address is the right one; I've kept it updated. Transparency, because it's the government. They are funded by the public. That's why they should technically, or ideally, do things that benefit a normal citizen.

In this world we are moving towards, to what you're saying as well, Adrian: yes, Texas may have that rule. And widely speaking, there's a huge divide in how many people even trust the government, because now there's a lot more awareness of this. Earlier, people were like, I guess it is what it is.

I just get a birth certificate from the government; that's what defines our birth. But nowadays, because of the maturity society has had, we live in a world where there are people asking: who do I really trust in this? And I guess, in all the conversation we just had right now, my number one thing was: I don't think I'll trust a company.

I would still rather just go down the path of at least trusting some form of a government body, even though it might come with a little pinch of salt. Is there anyone who is, and I [00:23:00] guess, Caleb, you mentioned Estonia and the other countries who are doing this already, with AI being front and center for a lot of things.

And I hate saying blockchain over here, but I was going to bring up blockchain, and it sounds like blockchain is the only place I can trust, with my own digital wallet.

Caleb Sima: Can I maybe throw some controversy into some of this? Look, ultimately, Adrian, I'll have you comment on this later, there is a true open system in which you don't have to trust a particular entity or individual to make things work. That is definitely both available and is probably the right answer. However, what's that? Blockchain. However, Ashish, you said something, which is: I would never trust a company.

And I guess my push on that is: you do. Yeah. You trust companies every day. You trust Google, you trust Apple, you throw all your information into these things, and you trust them [00:24:00] for most of your life, actually, for many critical things that you do. And to me, there is the reality of what you do every day and what you trust every day.

And then there's the let's-sit-down-for-a-philosophy-discussion version, where we talk about how I need rights and privacy and freedom and how I'll never trust these people, when in reality, you actually do it all the time. And so to me, it's okay, fine. I hear you. I understand what you're saying, but let's be clear.

If you actually did truly create an Estonia-level identity system, the amount of friction that gets removed from your life is phenomenal. Meaningfully material friction: from never having to log into a website, to never having to stand in line at a DMV for U.S. residents, to never having to get on a phone call with a [00:25:00] support person and having to repeat and give them all your information and go through all sorts of scanning of IDs in order for you to get access to an account, right? All of this stuff vanishes if you do it well. And to me, what people just don't seem to understand is:

Hey, there is a world that's amazing in front of you, but I'm going to be stuck in this privacy thing: oh, the government can track me. And I'm like, come on, man, you trust all your stuff to Google, all your stuff to Apple, all your stuff to Facebook and Meta, let's be clear.

And the government can track you. And if you are a targeted person, unless you're Jason Bourne, I really doubt you're going to be able to run, and that the fact that you don't have a digital identity is going to be the thing that saves you. That's not where this goes.

Ashish Rajan: Which is not true for most people today. In today's day and age.

Caleb Sima: Yeah. Yeah. But I love how people sit there and spout: oh, the government can't track me. Come on, man. Verizon and AT&T have more information about you than [00:26:00] your mom, and Walmart and Home Depot for Texans.

Adrian Ludwig: It's certainly the case that you trust all those parties.

The question is, do you trust them with everything? And I think there's also a question of, do you even have any idea what it is that they have visibility into? And that's the part that I get caught up on is when people are surprised about what's known about them. And where they have a misaligned expectation about what's known.

Does your cell phone company know where you are every moment of every day? Yeah for sure. Do you even know who your cell phone provider is? Do you know who your cell phone provider is when you travel to a different country? Because they still know where you are every single moment of the time when you're in that country.

And I think, to Caleb's point, we've just lost sight of how much of that information is out there. It doesn't mean it's not out there, and it doesn't mean you don't trust them.

Caleb Sima: I have this phone device called Cape, which is from [00:27:00] a startup company. It's basically a hidden, invisible cell phone, for exactly what Adrian talks about.

And it does. Yeah. So it rotates IMEIs on whatever schedule you want to set it on, makes it so that the actual phone itself cannot be positionally located in three dimensions. It doesn't know who it is, doesn't have a name associated with it. So this is a beta test device that I'm using for this stuff.

Ashish Rajan: Is this, is it actually in beta or are you just basically testing it out?

Caleb Sima: Yeah, I'm testing it. I'm testing it out. They sent me this whole, oh shit, like a really fancy box with this thing.

Adrian Ludwig: Live unboxing video. I'm very excited.

Ashish Rajan: Could be unboxing videos where people end up watching the video. But I guess, to your point, I thought you can't rotate those numbers, even if you do rotate the numbers.

Caleb Sima: oh, yeah. You can. Yeah.

Adrian Ludwig: [00:28:00] Are you jumping from one carrier to the next to do that?

Caleb Sima: Yeah. Or, the way that, I don't know all the technical details per se.

It'd probably be better to talk with one of these guys. Ashish, we should get one of these guys on the

Ashish Rajan: Yeah, for sure

Caleb Sima: on the thing. But yeah, they created their own virtual private network, right? That then partners with these networks, and your phone is not attributed to you, is completely anonymous, rotates IDs constantly.

You can set it to have GPS location rotation. So if you go into certain GPS areas, it will automatically rotate or disappear, or when you're in certain ones, it'll stay static. There are all these things that they have set in here to basically make it so this phone effectively becomes like a burner phone, but it's not, right? A non-traceable, non-trackable device.

Ashish Rajan: But Cape knows where you are.

Caleb Sima: Cape does not know where you are. They only know a device. They do not know who the device is associated to. They have no [00:29:00] idea where specifically it happens to be. None of that is tracked by them.

Ashish Rajan: But you do have an ID with them. You'll have warranties with them. There'll be all of these other things. I think those are dissociated from the actual network communication.

Adrian Ludwig: And you have a configuration of which things you consider to be most sensitive and where you want to spoof something, which gives away some fairly interesting metadata about you as an individual,

Caleb Sima: But again, going back to the point of what is your cell phone network know about you?

They know a lot. They know a lot about you. There are companies that, like these guys that are trying to

Ashish Rajan: Maybe going back to what you were saying about trust in a company versus trust in the government: we're moving to a future where the authenticity of an AI video or voice would be proven by some kind of authentication that I'm actually human and it's not a modified AI talking.

Maybe that's how we ended up with 350 identity companies, because money is the motivator. Or is it just, [00:30:00] I'm trying to figure out as we're talking about this, how do we maintain trust and transparency with AI today? It sounds like the work you're doing at Tools For Humanity is trying to work towards it.

Estonia, India, UAE, they're all working towards it.

Adrian Ludwig: We started with something that was really basic, which is what's the absolute minimum that you need to deal with generative AI being ubiquitous across the internet. And the answer that we came up with was you need to know that it's a person and there's no digital signal.

There's no records based signal that gives you that ability. So it has to be constructed out of the real world, which is where biometrics and the use of the camera to collect to take images and then generate a biometric that can be privacy preserving is where the discussion began. And then the 2nd attribute that we discovered was really important in the early explorations was you need to know if it's the same [00:31:00] person and usually what that means is the same person using a service over and over again authentication but there's a separate problem which is you can't have a single person control thousands or tens of thousands of identities where you think they are discrete people because that ability that AI has is just too manipulative.

And so the phrase Sybil resistance gets thrown around, and that's what we're looking for, right? Can you get to a point where you have ground truth, physical reality, and you're able to use that ground-truth physical reality to differentiate at the scale of everyone in the world? That was the sort of core question: can we build a system that could do that?

And then you get into the interesting question of, okay, if you have built that system, what are the security and privacy expectations that you have to put in place? What are the protections that you have to put in place so that system in aggregate will end up being advantageous? Because there's definitely folks who rightfully ask questions about how are you going to handle the data?

How are you going to hold the data? How are you going to protect the data? What are [00:32:00] the implications of the same identity being used for lots of different services? How are you going to prevent those problems? And so a lot of our work is, A, initially, can you get Sybil-resistant unique humanness, proof of humanness?

And then B, how can you do that in a way that is as secure and private as possible?

Caleb Sima: So Adrian, you got to tell us the answers,

Adrian Ludwig: the answers. It's hard.

Caleb Sima: Yeah. How does Tools For Humanity get us both that sort of basic identity, and also how do you keep the owning of multiple identities from happening?

Adrian Ludwig: Oh, there's two parts to it.

The first is that they are two different problems and they should be separate. And so the mechanism of proving that you are a unique human is what I think of as the enrollment process, getting to the point where you have a cryptographic identity, and then later is authentication and connection to services.

And so we've split those apart and they're not connected. And that I think is a key [00:33:00] insight that I don't even know if the team fully understood how important that differentiation was as we were defining it. But it ends up being really important. And blockchain gives us the ability to connect them in a way that's reliable and trustworthy.

Just to use that word because I know you want to use it.
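To make that enrollment/authentication split concrete, here is a toy sketch: a one-time enrollment that registers only a public value, kept completely separate from repeatable challenge-response authentication using Schnorr-style identification. This illustrates the general pattern only; it is not World ID's actual protocol, and the parameters are deliberately simplified and insecure.

```python
import secrets

# Toy parameters: a prime modulus and generator. Real systems use
# standardized elliptic-curve groups, not these illustrative values.
P = 2**255 - 19
G = 5

# Enrollment: happens once. The user keeps x; only y is registered.
def enroll():
    x = secrets.randbelow(P - 2) + 1   # private key, never leaves the device
    y = pow(G, x, P)                   # public identity (e.g. a registry entry)
    return x, y

# Authentication: prove knowledge of x for a registered y, per challenge.
def prove(x, challenge):
    r = secrets.randbelow(P - 2) + 1
    t = pow(G, r, P)           # commitment
    s = r + challenge * x      # response (toy: plain integer arithmetic)
    return t, s

def verify(y, challenge, t, s):
    # g^s == t * y^c  holds iff the prover knew x with y = g^x
    return pow(G, s, P) == (t * pow(y, challenge, P)) % P

x, y = enroll()                        # slow, in-person, one time
c = secrets.randbelow(2**128)          # a service issues a fresh challenge
t, s = prove(x, c)                     # fast, repeatable, per service
assert verify(y, c, t, s)
```

The point of the split is visible in the code: `enroll` and `prove`/`verify` share nothing except the registered public value, so proving uniqueness once and authenticating later are independent operations.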

Caleb Sima: So if I understand this correctly, what you're saying is, Hey, there is a big difference between identity and authentication, right? Authentication being a process and identity being a thing.

Adrian Ludwig: Yes. Yeah. And the identity that we want is an anonymous identity.

It's only that you are a human. It's only that you are a unique human and nothing else. And in some instances, you also want to prove that you are the same one. But in many instances, you don't need to prove that. So if what you're doing is just getting a voucher, or if what you're doing is you're playing in a game online, and you don't have a persistent identity in that game, you don't need to have persistence in either of those two situations.

You just need to prove, [00:34:00] you are human and not a computer.

Yep, and that's sufficient.

Caleb Sima: This is anti-bot, right?

Adrian Ludwig: yeah. And so that's sufficient for a large number of the use cases that are relevant in a GenAI future. And then there are some situations where you need to have a persistent identity over a long period of time.

And we're also working on building that. And it turns out zero knowledge proofs work really well for some of these things. So that's on the authentication side.
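World ID's production system uses Semaphore-style zero-knowledge proofs; the hash-based sketch below illustrates only the "nullifier" idea behind a persistent-but-unlinkable identity: the same secret yields a stable pseudonym within one service and unrelated pseudonyms across services. The names and identifiers here are hypothetical.

```python
import hashlib

def nullifier(user_secret: bytes, app_id: str) -> str:
    """Deterministic per-app pseudonym: stable within an app,
    unlinkable across apps (no ZK proof here, just the core idea)."""
    return hashlib.sha256(user_secret + app_id.encode()).hexdigest()

alice = b"alice-device-secret"
# Same person, same service: identical nullifier -> "same human as before".
assert nullifier(alice, "game.example") == nullifier(alice, "game.example")
# Same person, different services: different pseudonyms -> no cross-linking.
assert nullifier(alice, "game.example") != nullifier(alice, "voucher.example")
```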

Caleb Sima: So let me ask you about one thing. One super critical thing that you're talking about is identity versus authentication. A lot of people, even myself before I started really looking into this, get confused here. Today when we think about identity, we think about fingerprints, biometrics, right?

Facial ID, retina scans; there's even looking at your gait or keyboard strokes. And these are all proving identity. [00:35:00] But in reality, they're not proving identity. What they're doing is trying to authenticate, and these are very different things. For example, proof of identity doesn't require proof of life, right?

So things like gait analysis or typing or heartbeat are actually more authentication than identity, because you need to be able to identify someone dead or alive, right? And these kinds of factors have to be thought through when you do this process, I imagine.

Adrian Ludwig: Yeah. I think the differentiation is big. I always go back to early in my career, where I spent a long meeting with a group of cryptographers who just kept talking about identity. What they meant was a key. That cryptographic key was your identity. Everywhere that you interacted in the system, that was your identity.

And then the second question that you were addressing is like, how do you know whether the person who is holding the key is actually the right person? And that's the authentication [00:36:00] process. And in some contexts, authentication is you trust the hardware that the key is being held in. And you have no choice, right? So you have a very small root of trust.

Caleb Sima: That's our YubiKey problem, right? Yeah.

Adrian Ludwig: And so if you hold the token, which is the YubiKey, then possession of that is your mechanism of authentication, right? There's the classic framing: one way to authenticate is something you have, which is the token. Another is something you know, which is your password.

You can demonstrate that you know it by typing it into a console, and therefore you're able to authenticate. And then the third is you prove that you are the thing, which is the biometric modalities that you were describing, right? It could be everything from fingerprint to iris to retina to face to palm to DNA sampling.

There are a lot of different potential modalities, and they have different implications. And what we realized was we wanted to have an identity. So you had a cryptographic [00:37:00] key, and there's one factor of authentication around it: are you holding that key? Do you have access to that key?

And then it's really powerful to have a second factor for authentication. And if you can actually have that second factor be something that's linked to the process by which you enrolled that key, then that's super powerful and that's different from how most systems work at this point.

Caleb Sima: So let me ask you, I've always thought: shouldn't the person themselves be the key?

Isn't that sufficient enough? Why can't that just be the key? Like, why does there have to be a second artifact at which there has to be a holder of that key?

Adrian Ludwig: So there's sensitivity around just the person being the key. And I'll give a simple example that's useful in the context of legal systems.

You can be compelled to present your body in a courtroom. You can even be compelled to perform certain actions with your body, like placing your finger onto a piece of paper in order to do [00:38:00] fingerprinting. But you can't be compelled to speak, at least in the U.S. system. And, at least so far, you can't be compelled to release your password.

So when we were designing the mechanism for unlocking an Android device, which, I think, even though we didn't have a conversation with them, Apple ended up with a very similar design, we said: in order to decrypt the device, you need to have something that you know. And in part, that's because you can't be compelled to provide that.

To continue to have your device be unlocked, it's much more convenient to use your fingerprint, your face, any of these biometrics that are naturally present at the moment you interact with them. Those facilities are described as though they're unlocking your device, but what they're really doing is extending the period of time between which you need to unlock it with a password.

So in essence, you're not unlocking it; you're continuously keeping it open.

Caleb Sima: Yep. Exactly how we described it.

Adrian Ludwig: Yeah. And if you're the administrator for a [00:39:00] set of devices, you can even constrain how long that can be useful for. So you can say you can only do face unlock for an hour or two hours.

Caleb Sima: And then, then your employees get mad at you and pissed off.

Adrian Ludwig: I have a long history of the employees being mad at me. So it's fine. Just be, yeah.

Ashish Rajan: Where are we at the moment? We obviously have, to your point, the two stages: one is the identity, and then the key itself. Where do we stand today? Caleb, you've done some research on this as well.

Adrian, the company you work for is entirely working on this particular problem. Where are we today, in terms of the identity piece, to identify that, yes, Ashish is a human, leaving the key part separate?

To what Caleb was saying, if the retina, the fingerprint, and typing patterns are all authentication processes, can we say that we know what a human would be?

Adrian Ludwig: Proof of person is what you're going after.

Adrian Ludwig: Yeah. We believe at Tools For Humanity, working with the World Foundation, [00:40:00] that we've built something that provides that to a very high degree of reliability. It's analyzing a physical person.

It's looking at them in three dimensions. It's looking at heat signature. It's doing some analysis of what's going on. It's both capable of confirming that they are a real person, and also capable of differentiating, using an iris code, that they are one person who is unique from all of the other people who have previously enrolled in the system, and that they are alive. Now, not to be skeptical of the technology that I'm working on, but five years from now, when robotics is far better, will there be attacks on that?

Probably. So for the time being, I think we're ahead of the ability of AI and robotics to spoof a real human in the physical world. But in the digital world, undoubtedly there's nobody that's ahead.

Ashish Rajan: And when you guys made the announcement the first time, there was a huge line in the Bay [00:41:00] Area for people to get the retina scan with that globe thing. The orb, that's it. So is that just a scan of the retina, or did it take all of that information at that point in time so you could build on top of it?

Adrian Ludwig: Like any camera, it takes a picture of everything that it can see and then performs some analysis on it.

The camera we have built is multi-spectral: infrared, visible light, and some other spectra. It's analyzing roughly what you would see in a webcam, and then pays particular attention to the eyes. Yeah, we built the whole hardware stack. So it's a Linux device. It's got a special purpose-built camera.

In particular, one of the things that we wanted to be able to do is take an extraordinarily sharp image with a very narrow depth of field, right? Because you don't want eyelashes that occlude visibility to the iris and a few other things. And it's got a liquid lens inside of it. And it can get a depth of field down to, I think it's 0.4 millimeters. [00:42:00] And it can do it very fast because it's not physically moving lenses. It's just changing the shape of the liquid inside using electrostatic manipulation. And so it can go image this side, image that side, zoom in, zoom out and get a very precise image very quickly.

And then that's combined with all of the other images into some AI models that are running on the device to determine that you're a real person and you're there and that's the goal.

Caleb Sima: So I'm assuming that then creates some sort of private key, right? That you then own or hold, and that proves you are a person?

Adrian Ludwig: I hate to be blunt, but no. Yeah.

Caleb Sima: Just be blunt.

Adrian Ludwig: Yeah. Yeah. So you have your own private key. You generate your own private key. It's something that you hold on your device. For me, it's on my Android device. You're in San Francisco, so you're, 80 percent chance that you're on an iOS device.

You generate that key, you hold on to that key. And then the iris code is a [00:43:00] unique identifier per person, but it's just a description of the iris. That can be provided to a uniqueness service to confirm that it is in fact unique. And if it is unique, then you register your key, and a blockchain entry can be created for the key that you hold. So the key is not derived from the iris code.
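A minimal sketch of that enrollment flow, under stated assumptions: uniqueness is decided by comparing iris codes (here, by bitwise Hamming distance against a made-up threshold), and the registered key is supplied by the user, never derived from the biometric. This is the shape of the logic only, not the real service.

```python
import secrets

HAMMING_THRESHOLD = 0.32  # hypothetical: fraction of differing bits treated as "same iris"

def hamming_fraction(a: int, b: int, bits: int = 256) -> float:
    """Fraction of bit positions where two iris codes differ."""
    return bin(a ^ b).count("1") / bits

class UniquenessService:
    """Stores iris codes only to answer 'have we seen this person?'.
    The registered key comes from the user, not from the iris."""
    def __init__(self):
        self.iris_codes = []
        self.registered_keys = []

    def enroll(self, iris_code: int, user_public_key: str) -> bool:
        for seen in self.iris_codes:
            if hamming_fraction(iris_code, seen) < HAMMING_THRESHOLD:
                return False  # too similar to an existing enrollment: reject
        self.iris_codes.append(iris_code)
        self.registered_keys.append(user_public_key)  # e.g. a blockchain entry
        return True

svc = UniquenessService()
alice_iris = secrets.randbits(256)
assert svc.enroll(alice_iris, "alice-pubkey") is True
# A re-scan of the same eye differs by a few noisy bits but is still rejected.
noisy_rescan = alice_iris ^ 0b1011
assert svc.enroll(noisy_rescan, "alice-second-key") is False
```

Note the design point from the conversation: losing or rotating `user_public_key` never requires re-deriving anything from the iris, because the two are only linked at registration time.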

Caleb Sima: Oh, so I sign that iris fingerprint. And once I've done that, it should technically be forever bonded to that private key.

Adrian Ludwig: Yes. In the naive implementation, once you've enrolled one time, you only ever need that key.

Nothing will ever cause it to be changed. In practice, there are all kinds of other interesting questions. What happens if I lose my phone? How do I deal with backup and restore? What happens if I want to reset my identity? So there's all kinds of complexity there. But in the naive implementation, you only need to enroll one [00:44:00] time.

And then every subsequent interaction, you hold the private key on your device, and you can generate a proof that shows that you had been to an orb. And that you are the same person and,

Caleb Sima: But to your point, all of the complexity, and I guess the worry, comes from a lot of the situations you talked about.

Yep. Even today, I lose devices all the time. And Apple, I use Apple, does the "we've pushed a prompt to your other device" thing, and you're like, I don't have that other device to prove it. So how do you solve some of these concerns?

Adrian Ludwig: Again, the sort of simplistic mechanism is to provide a backup that is a user managed backup.

Modern operating systems, Android and iOS, give us the ability to do backups on a per app basis. It's an encrypted backup. And so the user can continue to retain custody, but in the context of one of those services. So that's something that exists right now. We're very interested in how to [00:45:00] support the even more extreme situations.

Let's say your Google account is lost or destroyed, or your Apple account is lost or destroyed, right? You lose all of your iOS devices and you can't get back in, and they won't allow you to do a reset for one reason or another.

Caleb Sima: Or someone hijacks my device, takes my private key.

Adrian Ludwig: Yeah. So there's a bunch of things that we're in the process of evaluating to help with that.

In order to securely protect the iris code itself, the service that determines whether you are unique uses something called multi-party compute, or secure multi-party compute. This is a little hand-wavy, but the basic mechanism is that the iris code is encrypted on the user's device with several other keys.

Each of those is a public key that corresponds to a private key held by a third party. And the iris code is not submitted as a single code; it's submitted as [00:46:00] what are called SMPC shares. It's encrypted three times with different keys, and each of those encrypted instances is given to a different multi-party partner.

Those partners perform operations in what's called the encrypted domain, and they each produce a value. It's only when you combine those values together that you get an answer to the question you're asking, which in this case is: are you unique? It's binary, a yes or no.

But you only ever get the answer if all three parties perform their computation and their responses are combined. And so what we have done, or really what the World Foundation has done, is partnered with third-party organizations: the University of California, Berkeley; the University of Zurich in Switzerland; and FAU, which is a university in Germany.

Those are three of the four parties that we have. There's a fourth one, a private entity rather than a university, that [00:47:00] provides a backup capability. But all three of those have to be online and participating in order for uniqueness to be determined.
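The real system distributes encrypted SMPC shares across independent parties; a simpler way to see why no single party learns anything is additive secret sharing, sketched below. Each party computes on a share that looks uniformly random on its own, and only the combination of all partial results answers a (linear) query about the hidden vector. This is an illustration of the principle, not the deployed protocol.

```python
import secrets

MOD = 2**64  # all arithmetic over a fixed modulus

def split(secret_vector, n_parties=3):
    """Additively secret-share each element: per position, the shares sum
    to the secret mod MOD, and any n-1 shares are uniformly random."""
    shares = [[secrets.randbelow(MOD) for _ in secret_vector]
              for _ in range(n_parties - 1)]
    last = [(x - sum(col)) % MOD for x, col in zip(secret_vector, zip(*shares))]
    return shares + [last]

def partial_dot(share, public_vector):
    """Each party computes on its own share: a meaningless value in isolation."""
    return sum(s * p for s, p in zip(share, public_vector)) % MOD

iris = [1, 0, 1, 1, 0, 1]       # toy "iris code" bits
template = [1, 1, 1, 0, 0, 1]   # a public comparison template
partials = [partial_dot(sh, template) for sh in split(iris)]
# Only the combination of all parties' results reveals the answer:
assert sum(partials) % MOD == sum(i * t for i, t in zip(iris, template))
```

Linear queries like this work directly on additive shares; the production systems the conversation alludes to support richer comparisons, but the "all parties must combine results" property is the same.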

Caleb Sima: So let's say I lose my phone. What is the process in this instance?

Not saying you have this all worked out, but in the ideal setup, what happens?

Adrian Ludwig: So there are a few different ways to think about it. You lose your phone and you just want to get your account reset: you can go to an orb. Or you lose your phone and you lose your backups. All gone.

Actually, you referenced Jason Bourne earlier; we call this the Jason Bourne situation. I woke up on a boat. I don't know how I got here. I don't know my name. How can I get reconnected to all of my things, my Spotify? So one possibility is to go to an orb and say: reset my account.

I don't want to have any of my old identity. I just want to be a human and get a new identity. Okay, so that's a relatively straightforward thing to do. The other, [00:48:00] and this is what we're still trying to figure out, is whether there is a way you can use the iris code as an authentication mechanism to connect to some instance of your identity that has been enabled in the cloud.

And what are the controls that need to exist on it? MPC is incredibly valuable in terms of providing a more robust security boundary. It also means that no one party actually has the underlying data. So not only is it harder to compromise by a hostile party, it's also more private, because no one party has access to it.

In fact, in some instances, not even two or three parties do. So you can imagine an MPC implementation that's much more robust because it's at larger scale. Those are some of the things we're trying to figure out: is there a way to enable that Jason Bourne situation where they can then regain access to their original accounts?

Who would actually be able to answer that question?

Caleb Sima: Yeah. MPC is familiar to me because it's also used in crypto custody. I [00:49:00] think you use a lot of MPC for those things. And maybe there's a scenario where, if I have trusted individuals in my life, two or three of them could collude to recreate my key.

Adrian Ludwig: Yeah. So the question is, say your identity is broken into shares, and they're on this MPC, then what is the authentication and authorization necessary to get access to it? It could be: I walk up and I say, here's my iris code, just give me access to it. It could be:

I have delegated to other parties that have to affirm my identity in order for me to be able to recover it. Or I could opt out completely and never want recovery to be possible. All of those are conceivably possible, and we need to figure out what's the right default, what are the right set of options that someone needs to have, et cetera.

Yeah. Super interesting problem, for sure.
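The "trusted individuals collude to recreate my key" idea Caleb describes is classically done with Shamir secret sharing: the key is embedded in a random polynomial, each guardian gets one point, and any threshold number of points reconstructs it while fewer reveal nothing. A toy 2-of-3 sketch, not any product's actual recovery scheme:

```python
import secrets

P = 2**127 - 1  # a Mersenne prime; all arithmetic in GF(P)

def make_shares(secret, k=2, n=3):
    """Shamir: hide the secret as f(0) of a random degree-(k-1) polynomial,
    hand out n points; any k points reconstruct it, k-1 reveal nothing."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(points):
    """Lagrange interpolation at x=0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        # pow(den, P-2, P) is the modular inverse of den (Fermat's little theorem)
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

key = secrets.randbelow(P)             # the identity key to protect
shares = make_shares(key)              # one share per guardian
assert reconstruct(shares[:2]) == key  # any two guardians can recover it
assert reconstruct([shares[0], shares[2]]) == key
```

Production recovery schemes (including MPC-based ones) add authentication and authorization on top, which is exactly the open design question Adrian raises.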

Ashish Rajan: Because now we have covered both sides of identity and authentication as well. What's needed for [00:50:00] us to get to a future where GenAI is already being adopted by a lot of companies. And we're not talking 10 years, we're talking right now it's being adopted. We are definitely very close to the authentication challenge coming up, probably in a year or so.

We're already talking about the non human identity, there's conferences happening for that. There's AI Summits happening for that. Obviously you guys are the only ones who seem to be working on this particular identity part. Authentication is still being worked on. Where do we stand as in today for people who are listening or probably watching this episode as well and open for you to add into it.

Caleb, considering you have a talk on this, and you had a whole blog on the identity part, on whether it's alive or not, people should check that out. For people who are listening or watching this, what is something that they should be doing today? I don't know if preparation is the right word, or should they just wait and see what happens?

Is that what we're getting at with this?

Adrian Ludwig: Preparation for non human identities. Get offline.

Ashish Rajan: Jason Bourne all the way.

Adrian Ludwig: Yeah. Yeah. Have you started building a hut yet? [00:51:00]

Ashish Rajan: Life skill planning. Can you survive in a forest?

Adrian Ludwig: No, I don't think on a sort of per-user basis that there's a... Actually, I take it back.

I'll make this my sales pitch. I think we're trying to put infrastructure in place to make it easy to do this. The way that infrastructure works is people sign up. And we're in the process, through the World Foundation, of making access to the orb as ubiquitous as possible.

There are a thousand locations around the world right now. And so there is a certain amount of ability that you have to future proof by having a world ID. And so I think right now that's probably the thing that someone could do. And then it's incumbent on application providers and service providers to think about, are there ways that I can use privacy enhancing technologies to minimize the amount of data that I collect about people?

And can I do that in a way that helps foster meaningful human connection without having to know a ton of [00:52:00] information about people because it's that gathering of data that makes GenAI so capable of tricking people.

Caleb Sima: Adrian, where do people go to learn more about the World Foundation and everything?

Adrian Ludwig: World.org is not a bad place to start. I think if you're in any of the wide number of cities where we have locations, there are people there who are happy to chat with you. I was just in Jakarta, Indonesia a couple of weeks ago; we've now got 10 different locations that people can go to there.

There are about 20 countries in the world where we have ongoing operations that are, you can go in person as well.

Caleb Sima: Another question, for myself and others who want to get Worldcoin: how do we buy it? We can't get it in the U.S., it seems.

Adrian Ludwig: Yep. As I'm sure you know, there are fairly stringent laws related to crypto in the United States, and we don't talk about it. That's changing, though.

Caleb Sima: Okay. But let me ask you: a lot of that is changing in the U.S. [00:53:00] around crypto. We clearly have a very crypto-friendly government. So do you think that's hopefully going to change in the future?

Adrian Ludwig: There's clearly evidence that there's a desire for change. And so we're excited to see where that goes. I think any company, any person at this point has to be careful to make sure that promises that are being made or futures that are being described actually come to fruition before you act as though they have happened.

And so I think a lot of the types of things that are being talked about right now, they still haven't actually happened. Yeah. The laws that are on the books are the laws that are on the books. And so we're very careful to make sure that in every region in the world, we operate with the utmost of integrity, but also well above the lines that are established by the law.

That's one of the things that we do here as well.

Ashish Rajan: Do you guys think the biggest challenge in all of this will be adoption? The reason I ask is that back in Australia, we had this thing going on for some time, and I think it [00:54:00] happens in the UK as well.

You could share your medical data with hospitals across the board, so all of Australia would know what my blood group is. If at any point I was admitted into a hospital in any part of Australia, the doctor wouldn't have to wait for me to regain consciousness to ask what I'm allergic to, or what the infection is, or whatever.

And the majority of people, especially the cybersecurity people that I know, never opted in for it. I feel like, is this going to be a situation where cybersecurity people are the ones resisting this adoption? Do you see that, or is it overwhelming adoption? Obviously, we're three people who are very favorable to this, it sounds like. But are you finding resistance? And if so, where is the resistance coming from? Is it more on trust and transparency? What are people worried about?

Adrian Ludwig: There are actual valid technical concerns.

And we're going to do everything we can to make sure that all those valid technical concerns are addressed and that we're ahead of [00:55:00] them. But I think in practice, the biggest hurdle to any technology adoption is just awareness, in the sense that it delivers value. And Caleb did a really nice job earlier describing the countless instances throughout your day where login systems and identity management systems create friction and inefficiency.

But we've always just accepted it, so it's hard for us to imagine how much benefit there could be from these systems. I think once you start to see adoption by service providers, once you start to see applications use them, once you begin to see that kind of efficiency, there will be a sort of snowball effect.

It becomes a no-brainer that you'll do it and that you'll participate, as long as those underlying security and privacy primitives are in place, which we think we've got. Yeah, I think it's just time and energy, and it's just going to end up taking a while.

Ashish Rajan: Awesome. And I appreciate you sharing that as well.

I'll actually put the links to [00:56:00] world.org as well. Where can people connect with you, by the way, if they want to talk more about the decentralized world of identity we're moving towards? What's the best place to connect with you and learn more, beyond the links you just shared with us?

Adrian Ludwig: Yeah. I think world. org has the most info. It's certainly people are welcome to reach out to me. I don't really have a lot of online presence, but I have a link.

Ashish Rajan: You're on the orb now. So the orb knows where you are.

Adrian Ludwig: No, the orb does not know where you are.

Ashish Rajan: But people can try and find you and see where we land.

Adrian Ludwig: I'm on Twitter, I have an X account, I just don't ever use it. I'm not on Facebook. I'm not on Instagram. LinkedIn, I'm on LinkedIn. You can find me there.

There you go.

Ashish Rajan: Apparently, as for Caleb, you may not be on Facebook, but you might be there. You just don't know it yet.

Caleb Sima: That's right. It's not the real me.

Ashish Rajan: [00:57:00] But I appreciate you spending time with us as well. I think there's a follow-up conversation to this, but thanks for coming on the show, man.

Adrian Ludwig: Thank you. Thanks, everyone.

Ashish Rajan: Thank you so much for listening and watching this episode of AI Cybersecurity Podcast.

If you want to hear more episodes like these or watch them, you can definitely find them on our YouTube channel for AI Cybersecurity Podcast, or on our website, www.aicybersecuritypodcast.com. And if you are interested in cloud, we also have a sister podcast called Cloud Security Podcast, where on a weekly basis we talk to cloud security practitioners and leaders who are trying to solve different kinds of cloud security challenges at scale across the three most popular cloud providers. You can find more information about Cloud Security Podcast on www.cloudsecuritypodcast.tv. Thank you again for supporting us. I'll see you next time. Peace.
