There’s a good story about Nasreddin who lived some 1000 years ago.

**Nasreddin Hodja:** I promised the Emir (King) of Bukhara that I would teach his donkey to speak like a human being within 15 years. He gave me a lot of money for it.

**Friend:** Are you crazy? You cannot do it. The Emir will behead you.

**Nasreddin Hodja:** Do not worry. Fifteen years from now, one of us will be dead anyway: the Emir, the donkey, or me.

When will quantum computers become relevant to cryptographers? That question is at the heart of Cryptography for the Post-Quantum World, a talk by Brian LaMacchia at Microsoft Research.

You know those people who work behind the scenes to make sure nothing bad happens to you, and if they’re really good, you never know who they are because nothing bad happens to you? Well, meet one of those people. Dr. Brian LaMacchia is a Distinguished Engineer and he heads up the Security and Cryptography Group at Microsoft Research. It’s his job to make sure – using up-to-the-minute math – that you’re safe and secure online, both now, and in the post-quantum world to come.

Today, Dr. LaMacchia gives us an inside look at the world of cryptography and the number theory behind it, explains what happens when good algorithms go bad, and tells us why, even though cryptographically relevant quantum computers are still decades away, we need to start developing quantum-resistant algorithms right now.

Brian LaMacchia: We still don’t really have big quantum computers. We have very tiny toy ones. But from being able to demonstrate theoretically that if a new fundamental model of computation showed up, that it would change all of our assumptions, that’s yet another example of how we have to constantly think about what an attacker has available, and if the attacker’s resources all of a sudden change, that means they can do more.

Host: You’re a distinguished engineer at Microsoft Research, and you head the security and cryptography team here, which you’ve called the company’s center of excellence for cryptography. What does your group do? What are the big questions you’re asking, the big problems you’re tackling? What gets you up in the morning?

Brian LaMacchia: We are, as I said, the Center of Cryptographic Research and Development for the company. We focus on the hardest problems that the company has that somehow involve cryptography, encryption, digital signatures, things like that. We started a decade ago, as a little cryptographic tools team, looking for places within the corporate research and development group where we could add value. And we tackled security problems and cryptographic problems for what was then grid computing and became cloud computing, for our data centers, and for security problems all over the place. But for the last three years, we’ve been focused on this primary work on the upcoming threat of quantum computers, if they’re successful. But then we also do work on other security problems. We spend a lot of time working on the security of Internet of Things devices, and how do we make sure that devices inside your home can’t be manipulated. I also have a member of my team who spends a lot of time on election security and verified voting: how can we bring the best in cryptographic research to end-to-end verifiable elections?

Host: Well, let’s do a little bit of a level set as we start here, about the field of cryptography. Can you give us a brief history of cryptography?

Brian LaMacchia: Sure. So, cryptography is the science of data encryption. And it actually goes back to ancient times. We know that the Romans used very simple forms of cyphers. The Caesar cypher was used to send information around. And cryptography, traditionally, was in the military field. And for the longest time, it was what we call, in the field, symmetric key cryptography. That is, if you and I wanted to exchange secret messages, we would agree on a secret password or a configuration of a mechanical device or something that we used to perform encryption. And then I would use that secret to encrypt information to you. You would get the cypher text, the encrypted information, and you’d use that same secret to decrypt it, so we have the same symmetric shared secret key. And of course, in the 20th Century, cryptography started being used more and more to protect wireless communications, right? To protect radio. Most famously, it was used in World War II by all sides to protect radio communications. And your listeners probably all know the story of the German Enigma machine, a mechanical encryption device that was broken: the initial research was done by Polish mathematicians, and then the work moved to Bletchley Park, where the British, under Turing, did a whole bunch of work, broke the Enigma, and could therefore secretly read encrypted communications. All of that’s within the realm of the shared key model. And then there was a breakthrough for what was called public key cryptography. And the difference in public key is, each of us who wants to communicate has a pair of keys that are mathematically related – a private key and a public key – and one of those keys you can release to the world. So, if I want to encrypt something to you, I go get your public key and I encrypt it to your public key, but I can’t decrypt it with your public key. You can decrypt with your private key that matches mathematically, and the same is true for me.
And there’s a variant of that, which is the digital signature problem: I can use the private half to digitally sign a message, and anybody can verify that it could only have come from me. And we use both of those technologies today. Every time you open a secure connection in your browser to a website, and that’s an https connection, you’re doing an encryption and digital signature operation so no nefarious characters can learn your credit card number or the email you’re typing if you’re using web email, something like that.
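The public/private key relationship described here can be sketched with textbook RSA, using deliberately tiny primes so the arithmetic is visible. This is an editorial illustration only: real keys are 2048 bits or more, and real systems never use bare modular exponentiation.

```python
# Textbook RSA with toy primes: illustration only, never real key sizes.
p, q = 61, 53                  # the two secret primes
n = p * q                      # public modulus (3233)
phi = (p - 1) * (q - 1)        # Euler's totient of n
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent: e * d = 1 (mod phi)

m = 42                         # a message encoded as a number < n
c = pow(m, e, n)               # anyone can encrypt with the public key (n, e)
assert pow(c, d, n) == m       # only the private key d can decrypt

s = pow(m, d, n)               # signing uses the private key...
assert pow(s, e, n) == m       # ...and anyone can verify with the public key
```

Real RSA also requires a padding scheme (e.g., OAEP for encryption, PSS for signatures); the bare exponentiation above shows only the underlying number theory.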

Host: Let’s talk about algorithms. Most people take them for granted, and may even be blithely unaware that algorithms are running their lives right now in many, many ways. And I bet if you asked anyone on the street, does math have an expiration or sell by date, or can an algorithm go bad, they’d just look at you like you’re strange. But you’ve said all cryptographic algorithms weaken, degrade, or break over time.

Brian LaMacchia: That’s correct.

Host: Talk about that.

Brian LaMacchia: Okay. So, unlike many other parts of computer science and computer programming, cryptographic algorithms, which are number theory at their heart, naturally degrade over time as we learn more about how to attack them, and as we assume that our attackers have more compute power available to them. So, we grade algorithms based on security levels: how much work do we think an attacker has to put in to break an algorithm? And as we learn more over time, the security level degrades. And algorithms that we think are okay today are not okay tomorrow. And that’s really important when you’re writing an application or a security protocol or a computer system, to understand that the algorithms you’re dependent upon today are going to have to change, and you can’t just use them for the future. It doesn’t necessarily have a “sell by” date on it, but we are constantly trying to predict what an attacker can do. And sometimes, it’s just more compute power being available. And sometimes there’s an academic result that, all of a sudden, changes our understanding of number theory. I guess the other thing to add is, sometimes we get a prediction of when an algorithm is going to break. Like, we will see a series of work done in academia where the attacks will come along and they will make further and further progress until something breaks catastrophically. Sometimes we don’t get a heads up. I can give you two stories on that, if you’d like stories.

Host: I would. I like stories, and I bet our listeners do too.

Brian LaMacchia: Okay, so a cryptographic hash function is a function that takes any amount of input and hashes it down to a fixed digest size. And for a long time, we used one called MD-5, which was invented by Ron Rivest, Professor Rivest of MIT, the R in the RSA algorithm. And we all thought it was secure. And in 2004, at the annual Crypto conference in the US, Professor Xiaoyun Wang from China got up and demonstrated two messages that had the same MD-5 hash value. And you’re not supposed to ever be able to do that. And she did that. And the fact that she could do that meant that the fundamental security property of that hash function was no longer any good. And therefore, we had to move to another hash function, because that one was busted, as far as we’re concerned from a cryptographic perspective.

Host: But you didn’t know that going in?

Brian LaMacchia: We didn’t know that going in, but we knew when we heard it, that all of a sudden, we were going to get press questions the following morning. And in fact, Josh Benaloh from my team and I sat at the back of the room and wrote a 4-page Q and A for all the folks back at Microsoft to understand what this meant for our products and services going forward. We transitioned to the next hash function that we had, which we called SHA-1. But SHA-1 shared some structural properties, similarities, with MD-5, and we figured that it would only be a matter of time until SHA-1 fell. And in fact, in March of last year, Marc Stevens at CWI in the Netherlands demonstrated a SHA-1 hash collision. And now SHA-1 of course has been broken the same way MD-5 has.
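The fixed-digest-size property described above is easy to see with Python's standard library. Note that MD-5 and SHA-1 still compute perfectly well; what the Wang and Stevens results broke is collision resistance, not the functions themselves:

```python
import hashlib

# Each function maps input of ANY length to a fixed-size digest.
for data in (b"", b"short message", b"x" * 1_000_000):
    assert len(hashlib.md5(data).digest()) == 16     # 128 bits; collisions now easy
    assert len(hashlib.sha1(data).digest()) == 20    # 160 bits; collision demonstrated
    assert len(hashlib.sha256(data).digest()) == 32  # 256 bits; still recommended
```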

Host: Talk a little bit about how you go about attacking your own stuff.

Brian LaMacchia: Well, first off, we assume that everything we do is out in the open. And this is sort of a fundamental thing for my group now. The algorithms themselves are open and published. The code that we ship is open source and available. And from a theoretical perspective, we assume that the attacker has access to all knowledge about the algorithm and the code and the construction. And the only thing they don’t have access to is the secret piece of the key.

Host: Key.

Brian LaMacchia: Okay, so when we try to attack our own algorithms, we’re hopefully using the same set of information and it’s, how can we deduce the secret key without knowing it? That’s part of the analysis and thinking up new techniques and trying them out and trying to get cost estimates for what’s doable if you have a cloud-computing infrastructure at your back, and, you know, what would it cost to break something of a particular size?

Host: Well, let’s move onto quantum. This is a big topic, and it’s basically what you’ve been talking about for quite a while: life in a postquantum world. And that’s still a ways out, but as they say in the movie industry, it’s coming to a screen near you.

Brian LaMacchia: That’s right.

Host: Maybe not your screen, but somebody’s. And also, maybe not right away. But let’s talk about what quantum computing is. I know we did a podcast with Krysta Svore who’s “all quantum all the time.” And that was her perspective. I want to hear from a cryptographer’s perspective. What is it? How is it fundamentally or materially different from classical computing, and why does it matter to researchers like you, Brian?

Brian LaMacchia: Sure. And first off, I should point out that actually Krysta gave a great explanation of this during her podcast, and our teams actually work together. We sort of dovetail with each other. But quantum computing is a fundamentally different model of computing. And from our perspective as cryptographers, the key breakthrough in this actually happened in 1994. That was when Peter Shor at AT&T Bell Labs invented a quantum factoring algorithm. That is, he demonstrated that if you had access to a big enough quantum computer, you could solve a problem in polynomial time. That is, you could factor in polynomial time, which we do not know how to do today or anything close to that, with classical computers. Now, Peter didn’t have a quantum computer. We still don’t really have big quantum computers. We have very tiny toy ones. But from being able to demonstrate, theoretically, that if a new fundamental model of computation showed up, that it would change all of our assumptions, that’s yet another example of how we have to constantly think about what an attacker has available, and if the attacker’s resources all of a sudden change, that means they can do more. So, from a cryptographic perspective, quantum computing is yet another model of computation that opens up a different line of attack and a different set of algorithms. And for a lot of the problems that we care about today, we know that quantum computers will make the attacks faster. And for some of the types of cryptography we’ve talked about, there are easy mitigations. And for some of the things we’re using today, there aren’t. And that’s sort of what the concern is.

Host: You’ve talked about a “big enough” quantum computer.

Brian LaMacchia: Yes.

Host: Let’s go there for a minute. What is big enough?

Brian LaMacchia: Okay, well for your listeners who might be interested, we actually had a paper that appeared at Asiacrypt last December, in 2017, working with members of Krysta’s team, trying to come up with precise estimates for how many logical qubits, logical quantum bits, you need for “big enough.” And what we mean by that is, when I think about how difficult it is to break a cryptographic algorithm, I talk about that in terms of, how big are the keys? What’s the security parameter for the algorithm? So, say I am typically doing RSA with 2-kilobit keys, 2048-bit public keys. That is the modulus, the product of two primes, each of about 1024 bits. How long does it take to factor that? And that is well beyond anything we could do with, sort of, all the compute power we have available to us today. But what our paper showed is that if you had just over double that number of quantum bits, just over 4096 quantum bits available in a quantum computer – and those are logical quantum bits that are stable – you can run Shor’s algorithm on it, and you can factor that 2048-bit number in polynomial time. So, for the types of public key algorithms that we are using today, if we’re talking about factoring, typically your RSA keys are 2 to 4 kilobits in size. And we need double that number of quantum bits, plus a little bit extra. Basically, from my perspective, things don’t get interesting until there’s at least 1,000 logical quantum bits around on a quantum computer, and really up to 10,000 logical quantum bits.

Host: And that is what you call a cryptographically relevant quantum computer?

Brian LaMacchia: Quantum computer. Cryptographically relevant. So, in our world, if it’s got say, on the order of 1,000 to 10,000 logical quantum bits, and you can program it, then it becomes cryptographically relevant.

Host: Now you’re going to pay attention.

Brian LaMacchia: And now you’ve got to pay attention. That’s where things get catastrophic for the public key algorithms that we’re using today. Or things get very interesting. Below that, there might be other interesting problems you can solve in chemistry, metallurgy, agriculture, things like that. But what I care about is up in the 1,000 to 10,000 quantum bit range.
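The sizing argument above can be reduced to back-of-the-envelope arithmetic: "just over double" the modulus size in logical qubits. The exact overhead beyond 2n qubits depends on the circuit construction, so the "+ 2" below is an illustrative stand-in for the "little bit extra", not an exact figure:

```python
# Rough logical-qubit count for factoring an n-bit RSA modulus with Shor's
# algorithm: just over double n. The "+ 2" constant is illustrative only.
def logical_qubits_needed(modulus_bits: int) -> int:
    return 2 * modulus_bits + 2

for bits in (2048, 3072, 4096):
    print(f"RSA-{bits}: ~{logical_qubits_needed(bits)} logical qubits")
# RSA-2048 comes out just over 4096, matching the figure in the text
```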

Host: Let’s say quantum does make it big and becomes cryptographically relevant sooner than we think. What’s the good news and bad news about a big breakthrough in quantum computing, in your mind?

Brian LaMacchia: The bad news is, it means a lot of systems that we use today have to get upgraded, and that the algorithms have to be replaced. And, pretty much, if you have or if you know that an adversary has access to a cryptographically relevant quantum computer, every commonly used public key encryption algorithm needs to be replaced. The good news is we’ve actually got a bunch of candidate replacements. This is work that my team’s doing, other folks around the world are doing. And in fact, the US government is running a standardization activity right now to try to pick some new “quantum-resistant” public key encryption and digital signature algorithms. These are classical algorithms. You don’t need a quantum computer to run them. These are algorithms that run on classical computers, your laptop, my phone… They can run just like RSA and Diffie-Hellman and elliptic curve today. They’re just based on different hard number theory problems for which we don’t believe there is a fast quantum solution. And an important point here is, we don’t have any proofs right now that the quantum-resistant algorithms that we’re all investigating are guaranteed to be quantum-resistant. What we know is that there’s no known quantum advantage. It’s a little bit of a subtle point, but it’s important, that even for the new algorithms that we and other people around the world are investigating, we don’t believe having a cryptographically relevant quantum computer gives you any advantage over having just a cloud full of datacenter servers to help you. But it’s different than saying, we are guaranteed that there is no fast quantum algorithm. That we don’t know yet.

Host: Right. Right. Well, if we situate ourselves in a postquantum world and we’re dealing with quantum-resistant algorithms, who has a vested interest in developing these, and who are the players at work here? You alluded to that just now. What’s the big picture, and who’s all involved?

Brian LaMacchia: So, there’s who’s designing them and then who uses them. And if you think about who uses them, well, it’s anybody who ships an implementation of a cryptographic library or, you know, inside of an operating system or a device. Anybody who’s trying to open a secure channel, a secure communications channel over the internet. You need to be able to authenticate the party at the other end, and you need to be able to establish an encrypted channel and send encrypted information back and forth. That’s just common practice, right? And as more and more of our communications are happening on the internet in general, we want all those to be encrypted and private. So, everybody who is involved in shipping code like that, one way or the other, is going to be a customer of quantum-resistant algorithms. Who’s developing it? It’s academic researchers and industry researchers, cryptographers around the world. My team’s currently working on four different algorithms right now, and each of them is an international collaboration where we have researchers from industry and academia participating with us on each of those four. And they’re different sets. And you know, there’s some people that are working on one algorithm with us, and some on another. And these algorithms have different pros and cons, when compared. Some are faster than others, some have smaller key sizes than others. They have different engineering properties. And it’s not clear it’s a one-size-fits-all sort of thing. My guess is that when the US government standardizes these in, hopefully, five years, they’ll actually choose a handful of encryption and digital signature algorithms for different use cases, because what you want to fit into that smart light switch in your home that you don’t want to be taken over by somebody, is very different than what you’re going to go put into your laptop.

Host: Well, let’s talk about that issue right there, the US government, among other governments. There’s a competition going on that I would love for you to tell us about and what it involves and what the purpose of it is.

Brian LaMacchia: Sure. So, NSA had, for a decade, been advancing the use of elliptic curve public key technology as part of a suite of commercially-available algorithms that they called Suite B, as opposed to Suite A, which were classified algorithms. They encouraged industry to ship Suite B to meet the needs of the US Department of Defense to protect up to top-secret-level information. Then NSA came out in 2015, and said, “By the way, if you haven’t finished the move to elliptic curve cryptography, you should save your development cycles, because we’re going to tell you to move to something quantum-resistant in the not-too-distant future.” That caused the US National Institute of Standards and Technology, or NIST, which is the standard-setting body for the United States government, not just DOD, for all government, to launch a standardization process, or a selection process, to come up with new algorithms. And NIST has led two very successful public standardization efforts in cryptography in the past, so NIST has a history of running these types of competitions. And now they’ve launched this competition. And, in fact, my team is part of four submissions, out of I think about 65 that made it in and are still active, although some of those have since been broken. And what happens now is we are all approved Round One candidates. And about this time next year, NIST will announce which of those move on to Round Two. And during this time, again, everyone’s trying to cryptanalyze their own and everybody else’s.

Host: Sure.

Brian LaMacchia: And to say what they can learn about it. And it’s up to NIST to whittle it down, and we believe that then there will be a Round Three, and that again, in about five years or so, they will announce some small subset of algorithms that will be approved, some for public key encryptions, some for digital signatures.

Host: To be implemented as the standard.

Brian LaMacchia: As the standard. They will make what’s called a FIPS, a Federal Information Processing Standard, which is an official US Government standard. And then, what we here at Microsoft will certainly do, and what others have encouraged us to do, is to take that to an international standards organization such as ISO, and make it an international standard. Because we really want whatever comes out of this process, which everyone around the world has contributed their intellectual horsepower to and has analyzed as much as possible, to become an international standard. Because you need international standards for interoperability.

Host: Absolutely.

Brian LaMacchia: We want everyone to, basically, agree on strong, safe and secure algorithms. So, the US Government standardization is a step in that process, but it’s not the end of it.

Host: And this is all aiming towards a post-quantum world.

Brian LaMacchia: That’s right. This is all about getting algorithms in place so that, if and when cryptographically-relevant quantum computers become real, that we will have algorithms that we will already have transitioned to.

Host: So, let’s talk about that timeline for a second. Realistically, I’ve heard from you and others that it’s maybe 15 years, and that’s optimistic. But why the 15-year workback plan? Why are you working on this now when you’ve got enough problems in a cloud-based world, and all the other things you’ve referred to?

Brian LaMacchia: Well, so that actually is the number I started with in 2015. And what happened is I went…

Host: Oh.

Brian LaMacchia: Yeah. I went to Krysta and her team, because we had started seeing these signals, and I said, okay, when do you all think that there’s a reasonable chance that we’ll have a cryptographically-relevant quantum computer? And at that time, they were saying about 15 years, which was 2030. So, I thought, okay, 2030 is a long time away. And then you start thinking about all the things that you have to do between now and 2030 to, effectively, upgrade the internet. Because that’s really what you’re talking about, right? You have to research new algorithms. You have to try to attack them. You’ve got to start a standardization process. You’ve got to prototype them. You’ve got to do test deployments. You’ve got to get them running on your own infrastructure. You’ve got to upgrade all your customers using your software. And then you have to turn off and decommission the things that will be broken. And when I look at how long it took us, as an industry, to do that for the MD-5 hash function after Professor Wang’s break, and I look at how long it took to do that with the SHA-1 hash function, you know, you add the pieces up, you need about 15 years. So, I didn’t think we were actually starting too soon. I think we were starting kind of right on time, and I think we’re still about right on time, if that 2030 number is still accurate. And it’s good to see the progress that’s being made within NIST. But I’m still encouraging people to try to move a little bit quicker and to start taking our own prototypes and start deploying in test environments to see how flexible their software is to handle these types of algorithms. And you can do that today.

Host: So that leads us into the concept of cryptographic agility, which we referred to earlier.

Brian LaMacchia: Yes.

Host: Talk about what that is and why it’s necessary.

Brian LaMacchia: Cryptographic agility, basically, is an architectural principle in your software: where you use cryptography, you do not hardcode in a dependency on one or a small number of algorithms. It’s all about making it very easy to reconfigure your software to use something else, for a number of reasons. Everywhere that you have a dependency on a cryptographic algorithm, you want to make sure that you can very easily reconfigure it. If, all of a sudden, somebody steps up and tells you that they can break your hash function, you want to be able to quickly flip everything over to use another hash function. And if we know that quantum computers are coming, and that we have to prepare for the post-quantum world, we want to make sure that all of our software that currently uses public key cryptography designs in the ability to use a quantum-resistant algorithm, even though we may not know exactly what that algorithm is yet.

Host: Or when they’re going to need it.

Brian LaMacchia: Or when they’re going to need it. But we can start making sure that all of our systems have that agility today. And part of the reason that my team doesn’t just do the theoretical work, but we put out these high-performance, constant-time, side-channel-resistant implementations, is so that we can actually integrate them into the commonly-used security protocols today and show how those algorithms would work. And that’s why you can actually go run common protocols like TLS or SSH or VPNs with our post-quantum algorithms in the mix.
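A minimal sketch of what cryptographic agility looks like in code: callers ask for an algorithm by a configured name rather than calling a hardcoded function, so a broken algorithm can be retired with a one-line configuration change. The registry and names here are hypothetical, not any real product's API:

```python
import hashlib

# Hypothetical agility registry: the application depends on a configured
# algorithm name, never directly on one hash function.
HASHES = {
    "sha256": hashlib.sha256,
    "sha3_256": hashlib.sha3_256,
}
CONFIG = {"hash": "sha256"}

def digest(data: bytes) -> bytes:
    # Resolve the algorithm at call time from configuration.
    return HASHES[CONFIG["hash"]](data).digest()

before = digest(b"hello")
CONFIG["hash"] = "sha3_256"    # the one-line "flip" after a break
after = digest(b"hello")
assert before != after and len(before) == len(after) == 32
```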

Host: Talk about this concept of, “record now, break later,” or as you’ve phrased it, “record now, exploit later.” Why should we be worried about somebody getting encrypted data that there’s no way they can decrypt right now?

Brian LaMacchia: So, this is a real worry. In fact, it’s another reason why even without quantum computers existing today, you may want to deploy post-quantum right now. You have to assume that if you’re sending sensitive data over a public network, that your adversary – whomever your adversary is – will record that data, has access to the public channel. That’s why you’re encrypting it in the first place. But data storage is cheap. Recording is cheap. So, if you and I are communicating over an encrypted connection, we have to assume that our mutual adversary is recording that traffic and storing it away for the day in the future when quantum computers are real, and the adversary can come back and use the quantum computer in the future to learn about what you and I talked about on the encrypted channel today. Now, if we’re exchanging recipes or something that we don’t think has a lot of long-term secret value, that may not matter.

Host: Well, mine do.

Brian LaMacchia: Okay, well mine don’t, okay? But, you know. But, let’s say that you are a Nation-State, and you’re sending information that’s classified. And those things typically have, I understand, a 30- or 50-year, or longer, time horizon, a security horizon. And it’s not just national government-level data. Let’s say that you’re in the pharmaceutical industry and some of your research is going to have a 20- or 30-year security horizon, because that’s the patent protection on the drug, or that you are in any industry where the information’s got a long security horizon. If the time in which you need the information to be protected is longer than when we think quantum computers are going to show up, you have to assume that information’s going to be recorded and broken when an attacker has access to a quantum computer. And so, your protection horizon is truncated by the appearance of quantum computers if you’re only using classical algorithms. So, if you’re trying to protect data for say fifty years today, you should be using a combination of the best classical schemes that we have right now, and a post-quantum scheme to try to give you some protection beyond the advent of quantum computers. That’s the safest thing. It’s what we call a hybrid scheme, where you use the best classical schemes that we have many, many decades of knowledge about from studying, and add in some new protection.
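The hybrid scheme described above can be sketched as deriving a single session key from both secrets, so an attacker must break both the classical and the post-quantum exchange to recover it. The random values below are placeholders standing in for real key-exchange outputs:

```python
import hashlib
import os

# Hybrid key derivation sketch: the session key depends on BOTH secrets,
# so breaking only one of the two exchanges reveals nothing.
classical_secret = os.urandom(32)   # stand-in for an ECDH shared secret
pq_secret = os.urandom(32)          # stand-in for a post-quantum KEM secret
session_key = hashlib.sha256(classical_secret + pq_secret).digest()
assert len(session_key) == 32
```

Production hybrids would use a proper key-derivation function such as HKDF rather than a bare hash, but the principle of combining both secrets is the same.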

Host: Well, let’s say that does scare me and I want to have that post-quantum algorithm or quantum-resistant algorithm. Can I get it?

Brian LaMacchia: Yeah, in fact, all of these submissions to NIST, as part of the submission, everybody had to make an open source implementation available with their algorithms. In fact, your listeners can go out to GitHub, and they can download all of our code, and you can get those libraries today and start using them. And if you happen to be a customer of OpenSSL, a very common TLS implementation, or OpenSSH, or OpenVPN, you can run that today. We even built a nice little demonstration device. We took a little Raspberry Pi and we turned it into a combination Wi-Fi hotspot and post-quantum VPN endpoint. So, I can take that with me anywhere in the world, and it sets up a VPN to a Linux machine running in Azure that is my other endpoint. And I can connect wirelessly to the hotspot in my hotel room, and I’ve got a post-quantum tunnel back to the Azure cloud.

Host: And all I’ve got is an open, unsecured Starbucks network.

Brian LaMacchia: You probably want a little bit more than that.

Host: I probably do, but – yeah, I should hang out with you more. Speaking of the things that scare me.

Brian LaMacchia: Yes.

Host: You gave a talk recently that you subtly titled, How to Prepare for Certain Catastrophe. And that’s a perfect setup for the question I ask all my guests, which is, is there anything that keeps you up at night?

Brian LaMacchia: Yeah, so the thing that keeps me up at night is that, say, Krysta Svore and her team are going to be successful sooner rather than later. By that, I mean that we’re going to see quantum computers show up more quickly than we anticipate, that the qubit construction challenges and the scaling problems will get solved by the very smart people working on them faster than we can standardize and deploy defenses. There’s this arms race going on between the quantum computing folks who are trying to build the quantum computers, and the post-quantum cryptographers trying to make sure the defenses are out there before the quantum computing people are successful. That’s what keeps me up at night, but it’s a good problem to have.

Host: How did you wind up doing cryptography research? What was your path to MSR?

Brian LaMacchia: It started as an undergrad at MIT. I was a co-op student at AT&T Bell Labs. And during my junior year, I took an undergraduate class in cryptography from Professor Shafi Goldwasser, who is now a Turing Award winner for her work in cryptography with Silvio Micali. Cryptography is this weird area of computer science that takes some of the purest mathematics and number theory and applies it to real-world practical privacy and security problems. And that was my jam. At the end of the class, I asked Shafi if she could recommend some people at Bell Labs who were doing cryptography for my next summer assignment. And I was fortunate that she pointed me to Andrew Odlyzko, who turned out to be my mentor for my master’s thesis. I did a couple of summers and a master’s thesis at Bell Labs, breaking what were then called knapsack cryptosystems, which are no longer used, because we’ve pretty much broken them completely. But they were a type of public-key cryptosystem that was being studied at the time. And that led to graduate school. Actually, my PhD was in artificial intelligence, and I went back to Bell Labs because they were looking for computer scientists with an economic, legal, or social bent to look at public-policy computer science research. But the work I was doing was interesting to Microsoft, and I got recruited out into the product teams. Then I got recruited into a group to become a cryptographic architect for some work we were doing on trusted computing very early on. In 2005, I ended up over in corporate R&D, working on what I call a little security SWAT team, basically, for one of our former CTOs. And in 2009, we got reorganized into Microsoft Research, into this new applied division, and that’s still kind of where I am. I have a mix of researchers and engineers, you know, developers, program managers on my team.
And everything that we do is both about furthering the academic field as well as putting open source implementations of our algorithms and protocols out for everyone else to use.

Host: Right. Well, and that’s a beautiful segue to… As we close, give some parting advice to researchers who are listening to this podcast, potential researchers… What might be on the horizon for them that you think would be good hard problems to work on, from your perspective, in this sort of math-intensive side of computer science research?

Brian LaMacchia: Well, here’s the easy softball one. If there are people out there who are interested in cryptanalysis, there are sixty targets, very easy targets, in the NIST competition for people to do cryptanalytic work on. Because all of these algorithms are under consideration, and the more we know about something, the better. One of the reasons I would not recommend that we just move solely to post-quantum algorithms today is that none of these algorithms have been studied as long as, say, RSA and elliptic-curve-based things. So, that’s why I actually think, for about the first decade of deployment, we’re going to do hybrid schemes where we’ll use both. That probably means you end up digitally signing things with two keys, one classical and one post-quantum. So, there’s a lot of cryptanalytic work there. And I think we’re still learning about leakage, ways in which our implementations in software and hardware leak information that makes them easy to break. You’re not breaking the mathematics. You’re effectively bypassing the mathematics by inferring bits of a secret key through physical properties of the device. And we have to use physical devices to work on this. And that’s a very rich area. Another area that we’re starting to do a little bit of work on, but I think holds a lot of promise, is formally verified implementations. And I think that’s a very rich area to do work in within the cryptographic application space.
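[The “signing with two keys” idea mentioned here composes a classical signature and a post-quantum signature over the same message, and a verifier accepts only if both check out. The sketch below is hypothetical and uses HMAC stand-ins in place of real schemes (e.g. ECDSA alongside a lattice-based signature) just to keep it self-contained; only the composition logic is the point.]

```python
# Illustrative sketch of dual (hybrid) signing: sign with a classical key
# and a post-quantum key, and require BOTH signatures to verify. The
# sign/verify primitives are HMAC stand-ins, not real signature schemes.
import hashlib
import hmac

def sign(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify(key: bytes, msg: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign(key, msg), sig)

def hybrid_sign(classical_key: bytes, pq_key: bytes, msg: bytes):
    """The hybrid signature is simply the pair of signatures."""
    return (sign(classical_key, msg), sign(pq_key, msg))

def hybrid_verify(classical_key: bytes, pq_key: bytes, msg: bytes, hybrid_sig) -> bool:
    c_sig, p_sig = hybrid_sig
    # Forging the pair requires breaking both underlying schemes.
    return verify(classical_key, msg, c_sig) and verify(pq_key, msg, p_sig)
```

[The cost is a larger signature and two verifications, which is the price paid during the transition decade he describes.]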

Host: So, there’s a lot of, still, fruitful areas of exploration and research?

Brian LaMacchia: Oh, absolutely. My team did some work back in 2008 and 2009 on distributed key management. That’s about how you share secrets securely among, say, every machine in every rack in a datacenter, without having somebody manually plug a USB device into every machine. And there are some non-trivial problems in that space. Key management of cryptographic keys is a very important problem, and it doesn’t tend to get as much attention as it should, and I think that’s another fruitful space.