Each year Microsoft Research hosts hundreds of influential speakers from around the world, including leading scientists, renowned experts in technology, book authors, and leading academics, and makes videos of these lectures freely available. And now we're going to have Vinod from Microsoft Research here at Redmond, in case you don't know him. He's going to speak about fully homomorphic encryption over the integers. Enjoy.

Okay, thanks. So this talk is about fully homomorphic encryption over the integers. The paper appeared at Eurocrypt this year, and it is joint work with Marten van Dijk from RSA Labs and my ex-colleagues Craig Gentry and Shai Halevi from IBM Watson.

Over the past two days you have heard a lot of very interesting talks about the problems of computing on encrypted data, outsourcing computation, fully homomorphic encryption, and so on and so forth, perhaps too much, and this talk is going to be more of the same. So let's refresh our minds a little. The setting we are looking at is a two-party setting. There's a client, Alice, who is computationally bounded, resource-bounded. She has an input x, and she wants to compute a program P on her input x and get the result, where the program P is computationally intensive. So she wants to delegate her computation to a powerful server, and she wants to do it while preserving the privacy of her input, because she is a paranoid cryptographer: she doesn't want to give the server her input. The canonical way to achieve privacy is to encrypt, so she encrypts her input and sends it to the cloud. And now we want a magic box which takes a ciphertext, an encryption of x, and a program, and somehow computes the result of the program on the input x inside the encryption. So it takes two inputs, a program and a ciphertext, and outputs the encryption of the result. Now assume that you have an encryption scheme
with such a magic box. Then you can compute the encrypted result, send it back to the client, she decrypts, and everyone's happy. A fully homomorphic encryption scheme is exactly an encryption scheme that comes bundled with this magic box: something that takes a program and a ciphertext and computes the program inside the ciphertext.

We are all cryptographers, so let me be a little more formal. (Is that okay? If people have short memories, it's not too much.) Again, this is a refresher. A fully homomorphic encryption scheme is exactly like a regular encryption scheme: you have a key generation algorithm, encryption, and decryption, and the extra thing is this magic box, which is Eval, an evaluation algorithm that takes a program and a ciphertext and computes the program inside the ciphertext. We need two critical properties. One is what we call compactness, which says that no matter how big a program you compute on the encrypted data, the resulting ciphertext doesn't grow. It should not be the case that the resulting ciphertext is as big as the program; that would be completely pointless for our application. So we need the ciphertexts to be small; this, together with correctness, is the first property. Number two, we need security, which is exactly standard Goldwasser-Micali semantic security. Okay, so that is fully homomorphic encryption.

A little bit of history, which you already heard in Craig's talk yesterday. This notion was defined by Rivest, Adleman and Dertouzos in '78. Before Craig's work, we knew only very limited classes of homomorphic encryption: either you could do addition, or multiplication, but not both, plus a little bit of an extension where you can do an unlimited number of additions but just one multiplication; in other words, you can evaluate quadratic formulas on encrypted data. That's what we knew before. And it turns out that if you give up the compactness property, if you allow the ciphertext to grow with the function you're computing, there are a whole bunch of constructions. In fact, in a recent result, also with Craig and Shai, we show that you can build such an encryption scheme generically from Yao's garbled circuits, so it can be based on DDH, or LWE, or any of these standard assumptions. So really, the hard part in constructing homomorphic encryption is compactness, ensuring that the ciphertexts stay small.

This problem was open for a while, until Craig constructed a fully homomorphic encryption scheme based on algebraic number theory and ideal lattices and all kinds of heavy machinery. The question we ask in this talk is: is there an elementary construction of fully homomorphic encryption? In particular, I don't want to use ideal lattices. I don't know ideal lattices, so obviously I can't use them. I want an encryption scheme which uses just addition and multiplication; it has to be relatively simple and easy to understand. That's exactly what we will show in this talk: we construct a fully homomorphic public-key encryption scheme which uses only addition and multiplication over the integers, and whose security is based on what we call the approximate greatest common divisor problem, or approximate GCD, together with a sparse subset sum problem; I'll define both in the course of the talk.

Okay, so let's jump right into the construction. It proceeds in a number of steps. The first step,
really the basic building block, is the construction of a secret-key somewhat homomorphic encryption scheme; I'll say what "somewhat" means in a little bit. This is different from what we want at the end in two respects. One, it's a secret-key encryption scheme, as opposed to a public-key one. It turns out this is easy to fix: there is a generic transformation that takes a secret-key encryption scheme of this form and converts it into a public-key encryption scheme. The second deficiency is that it's only somewhat homomorphic. Broadly speaking, this means you can evaluate a lot of circuits, a lot of programs, but not every program, not every polynomial-time computable function. It turns out that this can be fixed using techniques borrowed from Gentry, namely bootstrapping and squashing and so forth. So what I'll focus on in this talk is the basic building block, the construction of a secret-key somewhat homomorphic encryption scheme.

Okay, so let's jump right into it. How do we construct this scheme? The secret key is going to be a large odd number p. Here n is a security parameter; let's say p is an n^2-bit odd number. I will describe the encryption of a single bit for simplicity, but you can extend this to larger domains as well. To encrypt a bit b: first, pick a random, very large multiple of p. The secret p is only n^2 bits, but the multiplier q is roughly n^5 bits, so q times p is a really large multiple of p. That's number one. Number two, pick a random small even number, 2 times r: pick a random small number r of roughly n bits, so much smaller than the secret, and multiply by 2 to get a random small even number. Now sum these together: the ciphertext is q times p, plus 2 times r, plus the bit you want to encrypt. This is the encryption procedure.

How do you decrypt? Well, I know the secret p, so I can remove the multiple-of-p part from the ciphertext: take the ciphertext mod p. What I get is the remaining part, 2 times r plus b, mod p. The key thing to observe is that this number is so small that taking it mod p doesn't do anything to it; it stays the same, like taking 10 mod 17: 10 mod 17 is just 10. And the bit I'm interested in is simply the least significant bit of 2 times r plus b, so I take this mod 2, or simply read off the least significant bit, and I get my bit. This is the decryption procedure. So far I've used a little modular arithmetic and a little integer addition and multiplication; that's all.

Okay, so this is the encryption scheme. The only thing I have to show you now is how to add and multiply encrypted bits: given encryptions of b1 and b2, how do I compute an encryption of b1 plus b2, and of b1 times b2? Given this, you can write any program as a combination of additions and multiplications, so you can put these building blocks together to compute any program you want. This actually turns out to be pretty simple. The idea is that a ciphertext is a near multiple of p, and if I add or multiply two near multiples of p, I get another near multiple of p. It's not going to be as close to a multiple as before, but you can set the parameters so that it stays close enough. Concretely, how does this work? Say we have two ciphertexts, one encrypting b1 and one encrypting b2, and we want a ciphertext that encrypts b1 plus b2. I simply add the two numbers as integers, c1 plus c2, and that is a multiple of p plus a small enough number. It's a little bigger than before, but its least significant bit is the XOR of b1 and b2.
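To make this concrete, here is the scheme described so far as a short Python sketch. This is my own toy illustration, with tiny, insecure parameters (n = 4) and made-up function names; the real scheme needs much larger sizes and more care:

```python
import random

def keygen(n=4):
    # secret key: a random odd number of roughly n^2 bits
    return random.getrandbits(n * n) | (1 << (n * n - 1)) | 1

def encrypt(p, bit, n=4):
    q = random.getrandbits(n ** 5)   # huge random multiplier, ~n^5 bits
    r = random.getrandbits(n)        # small noise, ~n bits, much smaller than p
    return q * p + 2 * r + bit       # the ciphertext: a near multiple of p

def decrypt(p, c):
    # c mod p recovers 2r + b (the noise is too small to wrap around mod p),
    # and its least significant bit is the plaintext bit
    return (c % p) % 2
```

With these parameters the noise 2r + b is at most 31 while p is at least 2^15, so `decrypt(p, encrypt(p, b))` always returns b.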
The key thing to notice is that if I set the original noise small enough, then this noise stays pretty small compared to the secret key. Multiplication is the same story, except the expression gets a little more complicated if you write it down: it turns out to be a multiple of p plus a number which remains small enough, although the noise grows much faster under multiplication than under addition. Still, its least significant bit is the product of the two bits you're interested in, and you can make sure everything stays within bounds. So this is how you add and multiply bits, and once you have this, you can compose these operations to compute any program.

So we're happy, right? Everything's great, except for two problems. One: the ciphertext grows with each operation. Every time you add two ciphertexts, the bit length of the result grows by one; when you multiply, the bit length grows by a factor of two. This is no good, because it means the resulting ciphertext, after evaluating a program, is essentially going to be as big as the program itself; then why would the client bother? It's not outsourcing anymore. So we want to keep the ciphertext bounded; that's one thing. The second problem is that the noise contained in the ciphertext, this quantity 2r plus b, also grows with each operation. This is also problematic; let's see why. Picture a ciphertext as a point on the number line: here are all the multiples of p, and a ciphertext is a multiple of p plus a little bit of noise, 2 times r plus b. I claim that if this noise grows larger than half of p, then you're dead; decryption fails. Why? You first compute the ciphertext mod p, which gives the distance to the nearest multiple of p. But now the distance to the nearest multiple of p is some number r' which is different from 2 times r plus b, different from what was used in encryption. In particular, the least significant bit of r' is going to be different; in fact it's going to be the negation of the bit I encrypted. So decryption no longer succeeds. The lesson is that if the noise becomes larger than half of p, decryption fails, so we have to keep the noise under control.

Okay, so these are two issues with this encryption scheme, and they are serious: the first issue prevents using the scheme for outsourcing at all, so we really have to handle it, and the second is also problematic. How do we solve them? I'll just sketch the solutions. How do we solve the issue of the ciphertext growing? The solution is fairly simple. You publish, in the public key, "in the sky", a large random multiple of p without any noise, call it x0; it's just an exact multiple of p. After every operation on ciphertexts, an addition or a multiplication, you reduce the result mod this number in the sky. What does this buy us? The ciphertext stays less than x0, because you're reducing modulo x0, so it stays within a fixed bound. The encrypted bit remains the same, because reducing modulo x0 just shifts the ciphertext by an exact multiple of p. And the noise remains exactly the same; in particular it does not increase, and the least significant bit, the encrypted bit, doesn't change. So that solves the issue of the ciphertext growing. And the second issue is that the noise grows with each operation.
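The homomorphic addition and multiplication, together with the reduction modulo the published exact multiple x0, can be sketched the same way. Again this is a toy illustration of my own, with insecure parameters and made-up names:

```python
import random

n = 4
p = random.getrandbits(n * n) | (1 << (n * n - 1)) | 1  # odd secret key, ~n^2 bits
x0 = (random.getrandbits(n ** 5) | 1) * p               # exact noiseless multiple, published

def encrypt(bit):
    return random.getrandbits(n ** 5) * p + 2 * random.getrandbits(n) + bit

def decrypt(c):
    return (c % p) % 2

c1, c2 = encrypt(1), encrypt(0)

# adding ciphertexts XORs the hidden bits, multiplying ANDs them;
# reducing mod x0 shifts by an exact multiple of p, so the noise and
# the hidden bit are untouched while the size stays below x0
c_xor = (c1 + c2) % x0
c_and = (c1 * c2) % x0

assert decrypt(c_xor) == 1 ^ 0
assert decrypt(c_and) == 1 & 0
assert c_and < x0
```

At these sizes the summed noise is at most 61 and the product noise at most 930, both far below p, which is why the asserts hold for any randomness.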
Okay, so I have two answers to this issue. One is: that might be fine. You can actually perform a fairly large number of homomorphic operations before the noise hits this bound, and that might already be useful for many low-complexity functions that arise in practice. That's one answer. But what we were shooting for is a fully homomorphic encryption scheme; we are not happy with evaluating small programs, we want to evaluate every program. And this, it turns out, can be handled by a very beautiful theorem of Gentry, who shows how to take a somewhat homomorphic encryption scheme which can evaluate large enough programs (large enough meaning larger than the decryption circuit of the scheme itself) and turn it into a fully homomorphic encryption scheme. This is the blueprint that Craig was talking about yesterday; once you apply it, you get fully homomorphic encryption. So this is the sketch of the entire construction of the fully homomorphic encryption scheme.

What I want to tell you about, very briefly, is the security argument. Why is this scheme secure? It seems so simple, it can't be secure, right? But it is secure under a reasonable assumption, and I'll try to convince you of this. Let me talk about the security of the secret-key somewhat homomorphic encryption scheme that I presented. The security of the public-key version is equivalent to the security of this scheme, because the transformation is generic; it doesn't add any more complexity assumptions. For the security of the final scheme we additionally need to assume that a certain sparse subset sum problem remains hard; that's an extra assumption we need to make. So let's focus on the security of the somewhat homomorphic encryption scheme.

Okay, so what's the complexity assumption that we're going to use? It's something we call the approximate greatest common divisor assumption, the approximate GCD assumption. This is the name that Howgrave-Graham coined, in 2002 or 2003 I think, but the problem was known even before that, under different names. So what's the problem? It's going to be very similar to the learning parity with noise and learning with errors problems that Chris and others will talk about next. The problem is the following. You have a box which contains a large random odd number p, chosen from some domain, and you have an adversary who has oracle access to this box. What the adversary gets is the following. The first time he presses the button, he gets an exact multiple of p: a large random number q0 multiplied by p. But he's not happy with that, so here's this red button that he can push many, many times, say a polynomial number of times, and every time he does, he gets a large multiple of p plus some noise, with exactly the distribution of the ciphertexts in our scheme. So he gets a polynomial number of these near multiples, plus one exact multiple, and our assumption, the approximate GCD assumption, is that given all of this and polynomial computing power, you cannot find the number hidden within the box, namely p. This is the assumption.
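Concretely, the adversary's view in this game can be generated as follows. This is my own toy rendering of the box with the red button, with illustrative sizes rather than the talk's parameters:

```python
import random

def approx_gcd_instance(n=4, presses=8):
    # the hidden content of the box: a large random odd p of ~n^2 bits
    p = random.getrandbits(n * n) | (1 << (n * n - 1)) | 1
    x0 = (random.getrandbits(n ** 5) | 1) * p            # first press: exact multiple
    near = [random.getrandbits(n ** 5) * p + random.getrandbits(n)
            for _ in range(presses)]                     # later presses: q_i * p + r_i
    return p, x0, near

p, x0, near = approx_gcd_instance()
assert x0 % p == 0                         # the exact multiple
assert all(z % p < 16 for z in near)       # near multiples: tiny residues mod p
# the assumption: given x0 and `near` only, recovering p is hard
```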
Questions? [Question about the noise term.] Ah, okay: yes, the noise here is selected at random, just like the noise term in the ciphertexts. But note that the assumption is not the same statement as the security of the scheme; the security doesn't follow directly from the assumption, and you'll see why in a little bit; there are a number of differences between the two. [Question: and what are the security parameters?] Right, so n is, roughly speaking, a dimension for which lattice problems are hard, and the numbers are of size roughly n^5 bits. I don't know what a reasonable value of n is, maybe something like a thousand, and then the numbers are of size n^5 bits. So everything is polynomial, but large: q0 times p is of size n^5 plus n^2 bits, but n^5, effectively.

Okay, so this is the assumption. How does it relate to the security of the scheme? Well, the security of the scheme says that given an exact multiple and a number of near multiples of p with random noise, plus an encryption of a bit, the adversary cannot guess that bit. So clearly there is a syntactic difference between the assumption and what we need for security: the assumption says the adversary cannot solve a search problem, namely finding the number p, whereas breaking semantic security a priori seems much easier, since he just has to guess one bit. So a priori they seem to be different problems, and what we will show is that they are actually equivalent. In other words, if you can break the semantic security, you can actually come up with the entire number p; you can break the approximate GCD assumption. This is what we need to show, and what I'll show in the rest of the talk is an idea of how the security proof goes, a taste of it.

Okay, good. So again, what do we need to show? We have an encryption breaker, which breaks semantic security by guessing the encrypted bit, and we want to show that this guy can solve the approximate GCD problem, which is a search problem. The proof proceeds in a number of steps. First, by a sequence of steps, I'll transform the encryption breaker into an adversary that does the following: it gets an exact multiple, it gets a whole bunch of near multiples of p, and it gets a distinguished number c, which is also a near multiple but which I write separately, and it predicts the least significant bit of the multiplier q in this distinguished multiple. You'll see in a little bit why I write this in two different pieces. So I claim that an encryption breaker can be transformed into this algorithm, and I want the following property from this adversary: it should predict the least significant bit of q for every number c of this form. So in some sense the proof involves a worst-case to average-case argument over c: the adversary we started with, the one that breaks the encryption, only works for a random near multiple, whereas we want something that works on every near multiple. This involves an argument which says that computing the least significant bit of q is the same as computing the least significant bit of r. That's actually easy to see, because p is odd, so the LSB of q XOR the LSB of r equals the LSB of c; computing one is equivalent to computing the other. Then there is a hybrid argument and so forth, but this is something we can show using standard tricks. In other words, an encryption breaker gives an adversary which succeeds in this game, in predicting the LSB of q.

The next thing we can show is that to break the approximate GCD problem, it's enough to predict q from q times p plus r. This is very easy to see: if you can predict q, you can divide c by q, and what you get is p plus a very small number; you can take that small number off, and what you get is p. So predicting q is equivalent to finding p. Okay, so what I wanted to show was an arrow from the encryption breaker to the approximate GCD solver.
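The two small facts used in these reductions, that LSB(c) = LSB(q) XOR LSB(r) whenever p is odd, and that knowing q lets you recover p by rounded division, are easy to sanity-check numerically (toy numbers of my own choosing):

```python
import random

p = 1000003                            # any odd "secret" works here
for _ in range(100):
    q = random.randrange(1 << 20, 1 << 30)
    r = random.randrange(1000)         # noise, much smaller than p and q
    c = q * p + r
    # p odd  =>  the low bit of q*p equals the low bit of q
    assert c & 1 == (q & 1) ^ (r & 1)
    # knowing q  =>  c/q rounds to exactly p, since r/q is tiny
    assert (c + q // 2) // q == p
```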
I simplified my problem by doing these reductions, and the only thing I need to show now is an arrow between these two remaining problems: I want to show that an adversary that succeeds in this game gives an adversary that succeeds in this other game.

[Question from the audience.] Yes, the r's are small compared to p. So if I can predict q, I take c and divide it by q; what I get is p, which is an integer, plus r over q, which is very, very small. So I can round to the nearest integer, and that gives me p.

Okay. So all I need to show is how to go from the adversary that predicts the least significant bit of q to one that predicts q itself. Look at the left and the right: the game the adversary plays is exactly the same in both cases, except that here he predicts one bit, and there he predicts the entire number q. So how do we show this implication? The main idea is to use this adversary, which gives me one bit, again and again, on different numbers c. The adversary gives me one bit, the least significant bit of q, every time, and I'm going to use this bit to make the number c, and therefore the number q, smaller and smaller, until I have extracted the entire q. That's roughly the idea.

So how would you do it? Here's the natural way to make this work. I take my number c and feed it to the adversary, and he gives me the least significant bit of q. Now there are two possibilities. If the LSB is 0, I take the number c, divide it by two, and round. What happens is that I get (q/2) times p (and q/2 is an integer, because the least significant bit of q is 0) plus a noise term which is also smaller, by a factor of two. So what did I do? I made q shorter by one bit; that's progress. What do I do in the other case? If the least significant bit of q is 1, I take c, subtract p from it, and then divide by two. The new number is ((q-1)/2), which is an integer, times p, plus r over 2. Again I made q shorter by one bit. I do this again and again, learning the bits of q one by one, and I'm done. So, is everyone happy? People who haven't heard the talk before, please. Okay, good. The problem with this blueprint is that to do the second step, I'm assuming that I know p. But what am I trying to find by running this whole reduction? p itself. So it's circular, and that's no good.

So what do we do? We will replace that step by something more sophisticated, and we will do so by proving the following lemma. We want to keep the overall structure of the reduction the same, because it's so nice; we don't want to deviate from it; we just want to replace the second step. So we will show the following lemma. The lemma says that given two near multiples z1 and z2, that is, q1 times p plus r1 and q2 times p plus r2, together with the least-significant-bit oracle, I can compute a number z' of the following form: it is a multiple of p plus a small noise, where the multiplier is the GCD of the multipliers q1 and q2. Why does this help? If I choose q1 and q2 at random, their GCD is going to be 1 with very high probability, so what I get is 1 times p, which is p, plus r', which is much smaller than p. So I get essentially p, with a little bit of noise. And now I'm very happy, because I can take the second step, which didn't work because it needed p, and replace the p there by this number z', which is essentially as good as p, and everything goes through.
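The warm-up reduction just described, the circular one, looks like this as code. This is a hypothetical sketch of my own, with the LSB oracle standing in for the transformed adversary; the branch that subtracts p is exactly the illegal use of the secret that the lemma removes:

```python
def recover_q(c, p, lsb_of_q):
    # c = q*p + r with 0 <= r < p; lsb_of_q(c) returns the low bit of q
    bits = []
    while c >= p:                     # c >= p exactly when q >= 1
        b = lsb_of_q(c)
        bits.append(b)
        if b == 0:
            c //= 2                   # q -> q/2, noise r -> floor(r/2)
        else:
            c = (c - p) // 2          # q -> (q-1)/2, but this needs p!
    q = 0
    for b in reversed(bits):          # reassemble q from its bits
        q = 2 * q + b
    return q

# the oracle below "cheats" with p, playing the role of the adversary
p = 1000003
oracle = lambda c: (c // p) & 1
assert recover_q(123456789 * p + 37, p, oracle) == 123456789
```

The whole sketch hinges on the `c - p` line: remove the knowledge of p and the approach stalls, which is why the lemma is needed.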
Okay, so this is what we are going to do, and the only thing that remains to be shown is how to prove this lemma. This is the crux of the proof. But first, remember, and this is the key thing that maybe I glossed over: I transformed the encryption breaker into an algorithm that succeeds for an arbitrary c. It doesn't care about the noise distribution; it works for any c. This is the worst-case to average-case part of the reduction, which I swept under the rug, but this algorithm is guaranteed to succeed for any c, so I don't need to care about the distribution of the r's.

So, how do you prove the lemma? At a very high level, from 50,000 feet, the main idea is that you want to run the binary GCD algorithm on q1 and q2. The catch is that you don't have q1 and q2; you only have q1 times p plus r1, some form of "encryption" of q1. The main observation is that you can still run the GCD algorithm under the hood, even though you're not given the inputs themselves, and the algorithm works just fine, because it is noise-tolerant in some sense. The proof is very similar in spirit to the RSA hardcore-bit proof: we go some twenty-five years back, to the hardcore bit proof for RSA of Alexi, Chor, Goldreich and Schnorr.

So let me tell you, at a very high level, how to show this lemma. You're given two near multiples z1 and z2, and your goal is to reduce the q's inside the z's down to the GCD of q1 and q2. The first thing you do is compute the LSBs of q1 and q2, which you can do using the oracle that you have. Now there are two cases. Either at least one of the LSBs is 0, in which case we are in the good half of our reduction: you divide that z by 2 and everything goes through. The bad case is that both LSBs are 1, in which case you cannot divide by two; you would have to subtract p first, and so forth. In that case we use the fact that if both LSBs are 1, you can subtract the two numbers: you're subtracting two numbers whose hidden multipliers both end in 1, and you get a new number whose multiplier ends in 0. Now you can divide that by 2 and continue. So after these two steps, at least one of the numbers has an even multiplier, and you can keep reducing on that even multiplier; what you get at the end is the GCD of the two multipliers, inside the "encryption".

Okay, good. So that's the security proof. The question that remains: what we showed is that if you can break the semantic security of the scheme, then you can solve the approximate GCD problem. So the other question is, how hard is the approximate GCD problem? Here I'm going to wave my hands very vigorously, as you can see. This problem was studied by Howgrave-Graham, and earlier by Lagarias, by Coppersmith, and by Nguyen and Stern, under various names. It turns out to be equivalent to the simultaneous Diophantine approximation problem. So what do we know about attacks? The attacks are lattice-based. Lagarias himself gave an algorithm that solves the problem for a range of parameters. There is Coppersmith's algorithm for finding small polynomial roots, which you can adapt to this problem. Then there is the algorithm of Nguyen and Stern that uses the orthogonal lattice method, and so on. The point is that all of these methods run out of steam when I set the parameters appropriately, in particular when the bit length of the multipliers is larger than the square of the bit length of the secret.
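Looking back at the lemma for a moment, the binary-GCD-under-the-hood idea can also be sketched as toy code. This is my own illustration: the oracle and the loop's termination test both use p directly, purely as stand-ins for machinery the real proof has to build, and all the noise-tolerance subtleties are swept under the rug:

```python
def lsb_of_multiplier(z, p):
    # stand-in for the adversary: the low bit of the hidden multiplier q
    return ((z + p // 2) // p) & 1     # round(z / p) works while noise < p/2

def gcd_under_the_hood(z1, z2, p):
    # z_i = q_i * p + r_i with small noise; we steer binary GCD on the
    # hidden q1, q2 while only ever manipulating z1 and z2
    while min(z1, z2) > p // 2:        # both hidden multipliers still nonzero
        b1, b2 = lsb_of_multiplier(z1, p), lsb_of_multiplier(z2, p)
        if b1 == 0:
            z1 //= 2                   # q1 even: halve it
        elif b2 == 0:
            z2 //= 2                   # q2 even: halve it
        else:
            if z1 < z2:
                z1, z2 = z2, z1        # keep the larger multiplier in z1
            z1 = (z1 - z2) // 2        # both odd: q1 -> (q1 - q2) / 2
    return max(z1, z2)                 # ~ gcd(q1, q2) * p + small noise

p = 1000003
z = gcd_under_the_hood(36 * p + 5, 25 * p + 3, p)
assert abs(z - p) < p // 2             # gcd(36, 25) = 1, so z is p plus noise
```

With coprime multipliers, as random ones are with high probability, the returned value is p plus a small noise term, exactly the z' the lemma promises.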
This constraint is satisfied in our case, because the bit length of q is n^5 and the bit length of the secret is n^2: these algorithms run out of steam, they all stop working, when the parameters satisfy this constraint. So this is evidence that the problem is hard, although we need to work more to understand its hardness.

Okay, so this is all I wanted to say. Future directions: there is the obvious one, construct an efficient fully homomorphic encryption scheme. One thing that Craig mentioned, which I want to stress again, is that all three fully homomorphic encryption schemes use the same blueprint: they construct a somewhat homomorphic encryption scheme and then use this bootstrapping trick to make it fully homomorphic. The question is: is it really necessary? Indeed, the overhead of the whole process comes from the bootstrapping procedure, so the question is whether it is really necessary.

[Question: which three schemes?] Okay, well, there is Craig's scheme, there is the integer scheme that I just described, and the third one is the encryption scheme of Smart and Vercauteren, which is similar to Craig's original scheme except that, I would say, it lies between the two: the assumption is still lattice-based, but the ciphertexts are integers. So it somewhat lies between the two schemes.

Okay, so this is not quite efficient yet, and the question is whether you can make it efficient. There are some recent improvements; I forget what the first abbreviation stands for, but Gentry and Halevi implement the scheme and come up with some performance numbers. The second direction, probably more exciting for me, is to evaluate the security of the approximate GCD assumption. The thing is, this scheme says nothing about ideal lattices; all we are doing is manipulating integers, adding and multiplying numbers, whereas the original scheme explicitly used ideals and polynomials and so forth. This doesn't scream out "ideal lattices". So, at the very least, the question is: can you use some version of this scheme to construct a fully homomorphic encryption scheme based on a regular lattice problem, something that doesn't depend on ideal lattices? Ideally, we want a fully homomorphic encryption scheme based on the shortest vector problem on regular lattices. Can you do this?

Okay, so that's that. Questions? [Question about the key size.] Yes, if you count key bits, currently it is on the order of billions of bits. [Follow-up about 80-bit security.] That comparison is not quite right, because those schemes have a different security parameter; here the numbers are huge, and n corresponds to the dimension of the lattice. [Question: is the search-to-decision reduction for LWE related to list decoding?] Good point; the search-to-decision reduction for LWE essentially uses some of those list-decoding ideas. [Inaudible exchange.] That was the most interesting question of the day. Any other questions?

[Host:] It's a pleasure to present Chris, who came from, no, comes from Georgia Tech, sorry. He will speak about bi-deniable encryption. You can ask questions; I'll just make you feel really guilty about it afterwards.

[Chris:] Okay, this is about bi-deniable encryption; it's joint work with Adam O'Neill, who was also at Georgia Tech as of a couple of days ago; now he's at Texas. Okay, so, now that we have
that out of the way, I want to talk about a totally different concept called deniable encryption, which kind of started with some ideas of Benaloh and was formalized by Canetti, Dwork, Naor and Ostrovsky in '97. Imagine for the moment that Alice and Bob are siblings and they're trying to prepare a surprise party for their big brother. They want to keep it a secret, so Alice and Bob are going to use some encryption, and the way they do that is: Bob has his public key, which Alice knows, and Bob is going to receive a message from her. She encrypts it by taking his public key and some random local coins of her own, she encrypts the message "surprise party," and she ships this ciphertext over to Bob. But Big Brother is watching them, and he decides — using the methods that big brothers often have — to ask them what they were talking about, what they were saying behind his back. He can twist their arms, basically, and say "Alice, tell me the coins that you used to send this message," or "Bob, give me your secret key," and if he does that, then the surprise is spoiled. So Alice and Bob really don't want that to happen. What they would prefer is some way to encrypt the message in what we're going to call a deniable way. So maybe Alice uses a different encryption algorithm, called DenEnc, and Bob should still get the message that she intends for him to receive, but in preparation — in case Big Brother comes along and tries to coerce them — they're able to create some fake coins and maybe a fake secret key. Alice can do this locally and come up with some fake-looking coins and a fake key, so that when Big Brother comes along and coerces them, it looks like something innocuous that they can all agree on. So basically the point is that the fake coins that
Alice provides and the fake key that Bob provides should look completely consistent, as if another message had been encrypted. There should be nothing suspicious at all about these coins and keys; that's very important. [Audience question.] Yeah, we'll talk about that. I mean, one issue is, you know, how do they know — of course it has to look like a consistent story, so the coins should correspond to the same message that the secret key is going to decrypt to. So there's this question of coordination; I'll get to that in a moment. So, you know, it's not clear that you can do this kind of thing; it seems a bit paradoxical, although not outright impossible. And if you could do it, there are a lot of nice applications that you might imagine using it for — they all pretty much come down to this anti-coercion property. A lot of professionals will often have ethical or professional obligations: for example, journalists may have to promise their whistleblowers that they won't be able to be coerced into revealing who they are or what exactly the whistleblower delivered to them; lawyers have ethical obligations to their clients; whistleblowers, things like that. Or if you find yourself dealing with a different kind of Big Brother entirely, you might want a feature like this. Another application that has traditionally been in connection with deniable encryption is voting, where the idea is simple: if you want to vote for somebody, you encrypt their name as the ballot, and you encrypt it deniably, so that if someone comes to you later and tries to coerce you into revealing who you voted for, well, you can reveal whatever coins you like, showing them that you voted for whoever they happen to want you to vote for. You have to be a little bit careful when thinking about this, because in voting sometimes a person wants to be coerced — maybe they'll be paid a hundred dollars for voting for Smith instead
of Johnson — and so you have to be careful about whether the person doing the encrypting is being protected from outside coercion or is willingly able to be coerced. So this takes some care, and it depends a bit on what exactly you're trying to achieve. And then there's a different application entirely, which doesn't a priori have to do with coercion per se, which is secure protocols tolerating adaptive break-ins. What I mean by this is that a deniable scheme in particular has a property called non-committing encryption, and that's something that can be used to design adaptively secure multi-party computation protocols. In fact, deniability is a strictly stronger notion than non-committing encryption, but it's one way to achieve it. So you have to go back a little ways to find the state of the art. The only paper really that I know of in the theoretical literature is the one I mentioned before, by Canetti et al., and the main construction in that paper was what I'll call a sender-deniable public-key encryption scheme. This means that the coercer could go and force Alice to reveal the coins that she used to perform the encryption; so that's the main construction there. Then, if you add some interaction and basically flip the roles of Alice and Bob — so you now have an interactive encryption algorithm, an encryption protocol — you can get receiver deniability, where the receiver can be coerced but not the sender. And then they get what I'll call bi-deniability, which is shorthand for "sender and receiver can both be coerced," via some interaction with third parties; but the catch is that at least one of those third parties must remain uncoerced. So whoever is in the system, there's at least one person who must remain uncoerced and whose coins remain secret. Is that an accurate description? Yeah? OK, good. And then this is also not just desirable in theory; actually the practitioners seem to care about
this problem quite a bit. There's this product called TrueCrypt, which advertises pretty highly on its front-page list of features that it provides something called plausible deniability, and there's an old, apparently defunct software project called the Rubberhose file system, which is supposed to give you some deniability as well. What these actually give you is some kind of limited deniability: they allow you to say, "in this part of my hard drive I'm actually not storing anything — it's just gibberish that I padded it out with, there's nothing here, nothing to look at, move along." And this is a plausible story; I think you can make the case that it could work, you know, at a border crossing or something like that. It's good for storage, but I'm not sure it's so convincing an argument for communication, because if Alice and Bob are sending messages to each other, something's going on — they're speaking. I think it's not really plausible to say, "well, I just happened to send Bob a bunch of random bits, and there was really nothing behind it." [Audience question.] Yeah, that's kind of how TrueCrypt does it: the ciphertexts are random-looking, and they make a huge buffer on the hard drive full of random bits, and then those random bits slowly turn into ciphertexts that have messages underneath them — but who knows where the messages stop and the noise begins. So that's roughly how they work. OK, so that's kind of the state of the art. Let me tell you now what we're able to accomplish in this work. We get, again, what I called bi-deniable encryption, which means that the sender and receiver can simultaneously be coerced by Big Brother, and in the most general case they can reveal any message underneath the ciphertext, which can be chosen as late as the time of coercion, or maybe the moment just before coercion. So that's the most general thing we can achieve. I just want to point out that it is a true public-key scheme: there's no interaction, no third parties; Bob publishes his public key, and then Alice can send a non-interactive message by encrypting to that public key. It's going to use some special properties of lattice-based encryption schemes — it won't be until the last slide that we actually see a lattice, but we'll see abstractly what it is about these encryption schemes that makes this possible. One downside is that the scheme in its full generality has very large keys: the secret key in particular is large, and the public keys are accordingly large. Unfortunately, a pretty easy information-theoretic argument says this has to be so, if you really want to be able to reveal any message chosen at the time of coercion. The point is that if I've got a public key and a ciphertext, and I want to be able to reveal a secret key decrypting it to any of 2^k possible messages, then there have to be at least 2^k possible secret keys I can reveal — because decryption has to work, and decrypt that fixed ciphertext to any of the 2^k possible messages. So I need 2^k possible keys, which means the keys are of length at least k. So large keys: that downside is inherent. But you can get around it, and the way of getting around it is something called plan-ahead deniability, a notion also from the CDNO paper. Here we're able to use short keys, and the difference is that there's now a bounded number of alternative messages, and these are decided at encryption time. So like I showed you on the first slide: "surprise party, here's the plans," and then, you know, "Dad was so lame today" — those are the two messages that Alice and Bob will be able to equivocate to when coercion time comes. But the advantage of this is that
the secret key now is short, independent of the message length. And another plus about this — for whoever asked about it earlier, who's not here; you can tell them later — is that there's no issue now of agreeing on which message they need to consistently open it as: the fake message is baked into the ciphertext that was sent over, and, assuming one of these is a bombshell whistleblowing document and the other is something innocuous, it's pretty clear which one they're going to want to open it up to. [Audience question.] Yeah, you can encrypt arbitrary messages. The two messages — the bombshell whistleblowing one and the more innocuous party plans — are both arbitrary, but they can open to either of the two. OK, so, is that pretty clear? All of this is without a formal definition yet, but — yes, the receiver has to receive the ciphertext in both cases, and that's actually necessary as well: if you don't have the ciphertext and you commit to a secret key first, then the correctness requirement says any ciphertext will have to decrypt to its real meaning under that key, so the faking is inherently going to have to rely on the ciphertext. All right, good. And then one automatic consequence of this is that we get improved constructions of non-committing encryption, in particular in the lattice setting, because the best-known lattice-based non-committing encryption is three-round — rather, three-flow — so it's interactive, and now we have non-interactive. So that's just a plus. OK, so let me give you a definition of sender deniability. There are a few different ways you could imagine defining this, so we'll do one for the moment and then talk about some variations you might consider. As with any public-key encryption scheme, we have the normal keygen, encrypt and decrypt algorithms, and then I'm also going to ask for what we'll call a deniable encryption algorithm and a faking algorithm, and these are for the sender. Just as a warm-up, we're going to do a sender-deniable scheme, a sender-deniable definition. The definition is pretty easy to write down; let's just say for the moment that the messages are bits. We're asking that for any two bits b and b', on the one side we have the actual experiment, where a key gets generated and the bit b gets encrypted, and the view of the adversary is everything — well, not everything, but everything from the sender's side: you get to see the public key, the ciphertext, and the coins r that the sender used. So this corresponds to the coercer coercing the sender and getting their real coins. On the right-hand side we have something like that: we generate the key pair, but now the sender deniably encrypts this bit b', which could be the same as or different from what's on the left, and then at the time of coercion the sender creates some fake coins r*, using its knowledge of the previous coins and the previous bit, and hands the fake coins to the coercer. The nice trick here is that this definition by itself is strong enough to give you semantic security and other nice properties, so this is kind of all you need. [Audience question.] Yeah, so if you want to fake in this case — for the definition itself, you would keep the original coins around. It's actually nice in operation: depending on the scheme, you sometimes don't even need to remember the coins that you used for encryption; you can get them back from the ciphertext itself. So the answer is, it depends: sometimes you might have to store them, other times not. Right — and if you can erase, you say "I erased them." Right, and that's just the thing.
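The two experiments in that sender-deniability definition can be written compactly as follows — a sketch in my own notation, where Enc, DenEnc and SendFake stand for the honest encryption, deniable encryption and sender-faking algorithms just described:

```latex
% Sender deniability (sketch): for all bits b, b',
\[
\bigl(\, pk,\ \mathrm{Enc}(pk, b;\, r),\ r \,\bigr)
\;\approx_c\;
\bigl(\, pk,\ \mathrm{DenEnc}(pk, b';\, r),\ r^{*} \,\bigr),
\qquad
r^{*} \leftarrow \mathrm{SendFake}(r,\, b',\, b),
\]
% where the left side is an honest encryption of b opened honestly, and
% the right side is a deniable encryption of b' opened with fake coins
% claiming it was an honest encryption of b.
```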
If you weren't going to keep your coins in the first place, you might tell the coercer that you didn't keep your coins around and not give them anything — we'll talk about what that means in a second. OK, so this is a pretty simple definition to wrap your head around. A couple of variants you could consider. One thing I just want to point out is that the encryption here is one algorithm, and then there's a possibly separate algorithm, DenEnc, that you run if you want to create a fakeable ciphertext. You can also consider requiring just a single encryption algorithm, and there's basically a generic transformation that does a mix-and-match of the two, so you can create a joined single algorithm from joining up the Enc and the DenEnc. It's a generic transformation; the one caveat is that you only get a 1-over-poly distinguishing advantage. It's the parity trick — it's a bit hard to say in words — but what I'd do is: if I want to encrypt a bit b, I'll run, say, k times, either Enc or DenEnc, a random choice of one of the two each time, and I'll encrypt bits whose XOR is the real bit that I want to send; then to fake, all I need is to flip the bit claimed by one of the DenEncs. So that's basically the generic trick. OK, so that was just the warm-up, sender deniability. The real definition that we want to achieve is bi-deniability. We again have the three normal algorithms as before, and then we have a deniable keygen as well as a deniable encrypt, and we have a faking algorithm for both the sender and the receiver. So again we ask that two experiments be indistinguishable. This is exactly the one from before, except now the coercer owns the whole world.
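Going back a moment: that parity trick can be sketched in a few lines. A loud caveat — everything here is a stand-in: `unified_encrypt` and `open_fake` are hypothetical names, and the "components" are bare `(mode, bit)` pairs rather than real ciphertexts; the sketch only shows the XOR-sharing and flip-one-deniable-share logic.

```python
import random

K = 16  # number of component "ciphertexts"

def xor_all(bits):
    out = 0
    for b in bits:
        out ^= b
    return out

def unified_encrypt(b):
    """Encrypt bit b as K components whose XOR is b.  Each component is
    a (mode, share) pair: an 'enc' share must be opened honestly, while
    the claimed value of a 'den' share can be flipped later.  (Toy
    simplification: we force at least one 'den' component; in the real
    trick an all-'enc' draw happens with probability 2**-K.)"""
    shares = [random.randint(0, 1) for _ in range(K - 1)]
    shares.append(b ^ xor_all(shares))          # make the XOR equal b
    modes = [random.choice(['enc', 'den']) for _ in range(K)]
    modes[random.randrange(K)] = 'den'
    return list(zip(modes, shares))

def open_honestly(ct):
    return [share for _, share in ct]

def open_fake(ct, b_fake):
    """Produce claimed shares whose XOR is b_fake, flipping the claimed
    value of one deniable component if needed."""
    claimed = [share for _, share in ct]
    if xor_all(claimed) != b_fake:
        i = next(i for i, (mode, _) in enumerate(ct) if mode == 'den')
        claimed[i] ^= 1
    return claimed
```

Roughly speaking, the one flipped component is what costs the distinguishing advantage on the order of 1/K, which is where the 1-over-poly loss comes from.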
So this experiment happens, and then the coercer gets not only the public key but the secret key, not only the ciphertext but also the randomness of the sender — and that's everything in the world. OK, and then what we ask on the right side is that the sender and receiver now run the deniable algorithms instead. The deniable generator will output a pk, but also something we call a faking key — so this might be a differently behaving thing; it's a faking key. Deniable encryption is the same: it takes the public key and the bit. And then these two parties are going to fake something: the receiver will use his faking key and the ciphertext that he's received to give out a fake secret key, which should look like the secret key in a real, honest experiment, and the sender does exactly the same thing as before. And note that the inputs to these two faking algorithms are independent, so they can run them locally. [Audience question.] Sorry — yeah, the b here has to be the same thing, but that's it. What about coordinating on the ciphertext? Right, so the ciphertext is public information; the b is the coordination here. As I mentioned before, if you're doing plan-ahead, then there's no coordination needed — the b is implicit, it's already in the ciphertext — so with plan-ahead deniability there's no b here in either of those. OK. And then just a technical point: the receiver faking algorithm here I've shown outputting a fake secret key; you could even ask it to output fake coins for the key generator, which is at least as powerful — in fact that's what our scheme does, it outputs coins for the keygen — but thinking about it as a secret key is the most convenient thing to do. And we can consider merging keygen and deniable-gen together into one, and that all works as well, just generically; but for clarity I think it's a bit easier to think about two different algorithms — the actual deniable algorithms — and then using the faking algorithms to produce the fake secret key and coins. Yeah, and this will have negligible indistinguishability.
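Written out in the same sketch notation as before — DenGen, RecFake and SendFake are my abstract names for the deniable keygen and the two faking algorithms:

```latex
% Bi-deniability (sketch): for all bits b, b',
\[
\bigl(\, pk,\ sk,\ c,\ r \,\bigr)
\;\approx_c\;
\bigl(\, pk,\ sk^{*},\ c',\ r^{*} \,\bigr)
\]
% where, on the left:   (pk, sk) \leftarrow \mathrm{Gen},
%                       c = \mathrm{Enc}(pk, b;\, r);
% and, on the right:    (pk, fk) \leftarrow \mathrm{DenGen},
%                       c' = \mathrm{DenEnc}(pk, b';\, r),
%                       sk^{*} \leftarrow \mathrm{RecFake}(fk, c', b),
%                       r^{*}  \leftarrow \mathrm{SendFake}(r, b', b),
% with RecFake and SendFake run locally, on independent inputs.
```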
And then if you join them, you'll get that one-over-poly thing, which is a little bit of an annoyance. OK, so I just want to spend a couple slides — [Audience question.] Where does the loss come in? Well, it's the distinguishing advantage between the two whole experiments, if you were to use a unified keygen and a unified encrypt; and it comes in even if you only do it for one side, and it's the same if you do it for both sides. Yeah, correct, that's right. This is kind of an artifact of the parity trick — it only goes one way. OK, so maybe a slight philosophical digression, a discussion of whether any of this is meaningful. The first thing people often reply when we talk about deniability is: "well, everybody knows you're running a deniable algorithm, and everyone knows the coins and keys could be fake, so who do you think you're fooling? Don't be a jerk." And you're absolutely right to be thinking this — we're not fooling anybody. The point is not to convince your coercer that you actually sent something, but to preempt them from being able to coerce you in the first place. Let me unpack that a little. Think about the most perfectly secure secret communication you could imagine, like talking through a lead pipe where only the person on the other end can hear. This is inherently deniable: whatever went through that pipe is gone, it's ephemeral, and later on nobody can pin you down on what was said through it. Encryption, on the other hand, is typically entirely coercible: if you send a ciphertext across, there's only one possible meaning that ciphertext could have had. So this is trying to undo that undesirable side effect of encryption as a tool for confidentiality. The purpose, again, is not to convince the coercer of anything, but to use a scheme which then
does not open yourself up to being coerced in the first place. So it's preemptive aversion: being able to give a completely non-falsifiable, credible explanation for what happened — one that can't be disproved and explains everything you could possibly have done. A second objection, rather, is something people think about: can't the user just say "well, I erased my coins, I don't have them — in fact it was part of my protocol to erase them, so sorry, I have nothing for you" when the coercer comes calling? And isn't that morally equivalent to just saying you don't have them even if you do? I think not. I think there's a qualitative difference between erasing — or claiming you don't have something — and being able to give a fake explanation for it. One is that you might be in a regime where you're subject to audit or something like that, so you're required to keep all your actions: if you're in the financial industry, say, then you have to keep all your randomness, and if you get a subpoena you'd have to surrender it. Another is that, for a sender creating a ciphertext, it is kind of plausible that they chose the random coins they needed to produce the ciphertext and then threw them away — they were ephemeral, there's no reason to keep them around, which is the point we were mentioning earlier. But for a receiver, it's really essential that he keeps his secret key around: the whole point is that he has a secret key so he can decrypt. So it's not really credible to say "oh, I just happened to erase my secret key," even though you're supposed to use it to decrypt. So anyway, that's my justification for why I think this is a meaningful notion. All right, let's do some technical stuff. This is a tool for deniability called translucent sets, which was introduced in CDNO, and the idea is this:
you've got the public key — some description of what's called a translucent set — and it comes with a secret key, a secret trapdoor. The set is basically this: you have a universe, let's say all k-bit strings, and within this universe there's a set P of special strings. So this is the property of a translucent set: given pk, you can efficiently sample from this little set P, and of course you can sample from U just by picking a random bit string. And the main property is that a P sample is pseudorandom — it looks like a U sample: given only the public key, you can't distinguish whether something came from P or from all of U. This immediately means a P sample can be faked as a U sample: if you pick from P, you can claim it came from U — "I have this k-bit string, and these are just the k random bits I chose when I was sampling from U" — and nobody can be convinced otherwise. So even though you picked from P, this allows you to claim that you picked from U. Nice idea. Now, given the secret key, you can actually distinguish — the trapdoor lets you distinguish between a P and a U — and that's what will allow the decryptor to get the true message, the true meaning, rather than the faked one. There are many instantiations of this idea; you can do it from pretty general assumptions — trapdoor permutations, decision Diffie-Hellman, lattices, pretty much any assumption you like. So let's see how this gives us deniability. We define the normal encryption algorithm as follows: to encrypt a 0 bit you send two U samples, and to encrypt a 1 bit you send a U and then a P. If you want to encrypt something deniably, then a 0 becomes a P and a P, and a 1 is the exact same as before — just a U and a P. Bob can use his secret key to tell exactly which is a U and which is a P, so he can decode the right answer: if he sees two of the same thing he calls it a 0, and if he sees a U and a P he calls it a 1, and that's all. So let's see how this allows deniability. It allows Alice, when she gets coerced, to fake things, because she can always claim a P is actually a U, wherever she wants. You can just check: if she created a deniable ciphertext, either a 0 or a 1, she can always go to a 0 or a 1 up top, just by turning Ps into Us and claiming them. So that's an easy check. The problem is what happens when Bob gets coerced, because the secret key actually reveals the true values. OK, so this is precisely the technical problem we're going to address, and it's something we're calling bi-translucent sets. There are a couple of main properties. The first is that a public key now has many different secret keys, and each of them gives you slightly different behavior when you test whether a sample is a U sample or a P sample. So the test is just slightly different: with one secret key I might call certain things P values, but with a different secret key I might classify those same things differently — my test is slightly fuzzy, and it depends on exactly which secret key I hold. Now, if I'm given a P sample that was chosen by the sender, then most secret keys I could have will classify it correctly — they'll call it a P sample with overwhelming probability — but there are some very rare secret keys which will misclassify this value x. And moreover, if I have the faking key for this translucent set, then I can actually generate a good-looking secret key — a perfectly valid-looking one.
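Here's a toy sketch in code of that basic encode/decode logic. One loud caveat: a real translucent set lets anyone sample from P given only the public key, whereas this stand-in (a keyed hash) needs the secret key to sample from P — so it only illustrates the 0 → (U, U), 1 → (U, P), deniable-0 → (P, P) mechanics and the sender's "claim a P is a U" move, not the actual public-key object.

```python
import os
import hashlib

TAGLEN = 16  # bytes of the membership tag inside a P sample

def keygen():
    return os.urandom(16)          # trapdoor used to test membership in P

def sample_U():
    return os.urandom(32)          # a uniform 32-byte string

def sample_P(key):
    # Toy P sample: r || H(key || r).  Without `key` this looks uniform;
    # with `key` it is testable.  (Real translucent sets sample P publicly.)
    r = os.urandom(32 - TAGLEN)
    return r + hashlib.sha256(key + r).digest()[:TAGLEN]

def in_P(key, x):
    r, tag = x[:32 - TAGLEN], x[32 - TAGLEN:]
    return hashlib.sha256(key + r).digest()[:TAGLEN] == tag

def enc(key, bit):                 # honest:   0 -> (U, U),  1 -> (U, P)
    return [sample_U(), sample_P(key) if bit else sample_U()]

def den_enc(key, bit):             # deniable: 0 -> (P, P),  1 -> (U, P)
    return [sample_U() if bit else sample_P(key), sample_P(key)]

def dec(key, ct):
    a, b = in_P(key, ct[0]), in_P(key, ct[1])
    return 0 if a == b else 1      # two-of-a-kind is 0; (U, P) is 1

def sender_open(claimed_bit):
    # The sender can always claim a P sample was a U sample (its bits
    # pass as her random coins), so a deniable 0 = (P, P) can be opened
    # as (U, U) = 0 or as (U, P) = 1, whichever she needs.
    return ["U", "U"] if claimed_bit == 0 else ["U", "P"]
```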
That secret key is going to cause this misclassification. OK, so it's crucial here that I'm looking at x, and I'm using my faking key to find a particular sk* that gives me exactly the wrong decryption: it's going to call x a U even though it really is a P. This is essentially what will allow the receiver, Bob, to fake the secret key. And it's really crucial that this be a good-looking secret key. I mean, you could come up with some totally bogus secret key that just causes the decryption algorithm to crash, and that would be something, but such a secret key looks abnormal — it's not something that would be encountered in real life — so it's not going to convince a coercer. So we're really going to have to work pretty hard to get a good-looking secret key here. Is everything clear, at least abstractly? All right, so now I think we're going to have a picture of a lattice. [Audience question.] Yeah, I wish I could do it both ways, but I don't know how: if I could take a U sample and come up with a secret key that makes it look like a P sample, then we would have solved this one-over-poly issue. You can imagine there's possibly a way to do it, but so far no luck — maybe if we keep working at it. In other words, the point is that if you have the faking key, you can induce what we'll call an oblivious misclassification of a P sample — "oblivious" meaning that when you decrypt using that fake secret key, it gives you the wrong answer, but you don't know it gave you the wrong answer. That's crucial: you can't recognize when it's giving you the wrong answer. Yeah, right — if my secret key is chosen independently of the ciphertext, then with overwhelming probability it will give the right classification. OK, so here we'll see how to do it with lattices; we'll just do a proof by picture. Good. On the left side we have a lattice — the green points — and the secret key is, roughly speaking, a random short vector within some bounded radius in this lattice. So the public key is a description of the lattice, and the secret key is a short vector; formally, it's drawn from a Gaussian distribution over that lattice, and you can generate these two things together. And then the way one sends a ciphertext is kind of the standard way, going back all the way to the Ajtai-Dwork work, roughly. To send a U sample, you just send a uniform x in what's called the dual space. What this means is that if you take the dot product of the secret key with x, you get something uniform: you take the dot product, you get a real value, and it's uniformly distributed modulo 1 — the fractional part is uniform. The reason is: x is uniform, and if you take the dot product of any primal vector with any dual vector you get an integer — that's one fact we're using — so you get an integer plus a part that's uniform over the fractional values. OK, so that's how you send a U sample. A P sample is an x that's very close to this dual lattice: you take a point in the dual, add a little bit of Gaussian error, and send that over as the ciphertext. The nice property here is that the dot product of the secret key with x will be very close to an integer, simply because sk dotted with the dual point is exactly an integer, and so sk dotted with a nearby point is still very close to an integer. So with the secret key you can distinguish a U sample from a P sample; and yet an x chosen close to the dual versus uniform in the dual space — these two things are indistinguishable, and you can show this using learning with errors.
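A small numeric illustration of just that U/P test. Nothing here is secure: I use the integer lattice Z^N, whose dual is trivially Z^N again, purely so that "dual lattice point" means "integer vector"; the real scheme uses a random q-ary lattice and rests on LWE.

```python
import random

random.seed(0)
N = 8          # dimension
SIGMA = 0.002  # width of the Gaussian error on P samples

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def frac_dist(t):
    """Distance of t from the nearest integer."""
    return abs(t - round(t))

# Toy secret key: a short integer vector (the real scheme draws a short
# Gaussian-distributed vector in the primal lattice).
sk = [random.choice([-2, -1, 1, 2]) for _ in range(N)]

def sample_U():
    # Uniform in the dual space modulo 1.
    return [random.random() for _ in range(N)]

def sample_P():
    # A dual lattice point (here: any integer vector, since the dual of
    # Z^N is Z^N) plus a small Gaussian offset.
    y = [random.randrange(-50, 50) for _ in range(N)]
    return [yi + random.gauss(0, SIGMA) for yi in y]

def looks_like_P(sk, x):
    # <sk, x> is an integer plus <sk, error>, so it is near an integer
    # exactly when x is near the dual lattice.
    return frac_dist(dot(sk, x)) < 0.1
```

Every P sample lands within a hair of an integer under the dot product, while a uniform x spreads its fractional part over [0, 1). Note a uniform sample still passes this toy threshold with probability about 0.2; real schemes drive that error down (for instance by amplifying across samples), but the mechanics are the same.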
Good. So this is the basic translucency: you can define P samples and U samples which are indistinguishable, but which can be distinguished if you know a short vector over here on the left. OK, now we have to do the hard work of the receiver side — the receiver faking, rather. The faking key in this case is not just one short vector, as in the normal case, but an entire short basis of the primal lattice; and for everything I'm about to say, you'll have to take my word that we have algorithms for generating these things and doing the steps I'm about to describe. So what we have now is: we want to fake this P sample x — we'd like to come up with a fake secret key that makes it look like a uniform U sample. The way that works is: we're given this x, and we're going to choose a secret key — which should be a short vector in the primal lattice — highly correlated with this red vector, this little red offset. We use the faking key to extract out that red offset, then go some random distance in that exact direction, and pick a secret key somewhere in that neighborhood. So the dotted line is: I'm going north-northeast for some random number of steps, and then I pick the secret key vector from around that neighborhood. I'm just inducing this big correlation between the secret key and the ciphertext. The reason this works is that, because these two things are so correlated, their dot product is going to be very large, and because I walked out a random distance, it will be random modulo 1. So when you take the dot product here, you get a uniform value, and were you to decrypt with that key, x would look like a U sample.
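The "walk a correlated distance along the offset" step can be sketched numerically, again on the toy self-dual lattice Z^N with no security. Two loud cheats: instead of using a short basis to decode the offset, the sketch just remembers the error vector e it sampled; and the walk distance t is set deterministically to land halfway between integers, whereas the scheme walks a random distance so the fractional part is uniform. The fake key it produces is also enormous and would not look normal — making sk* look like an honest key is exactly the hard statistical part described next.

```python
import math
import random

random.seed(1)
N = 8
SIGMA = 0.002

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def frac_dist(t):
    return abs(t - round(t))

# A P sample x = y + e, with y a dual lattice point (an integer vector,
# since the dual of Z^N is Z^N).  Cheat: we keep e around instead of
# decoding it with a short basis.
y = [random.randrange(-50, 50) for _ in range(N)]
e = [random.gauss(0, SIGMA) for _ in range(N)]
x = [yi + ei for yi, ei in zip(y, e)]

# An honest short key classifies x as a P sample: <sk, y> is an integer
# and <sk, e> is tiny, so <sk, x> sits next to an integer.
sk = [random.choice([-2, -1, 1, 2]) for _ in range(N)]
assert frac_dist(dot(sk, x)) < 0.1

# Receiver faking: walk distance t along the recovered offset e, then
# round to a lattice (integer) vector.  Then
#   <sk_star, x> = <sk_star, y> + <sk_star, e> ~= integer + t * ||e||,
# and t is chosen so t * ||e|| lands halfway between integers, so the
# fake key classifies x as a U sample.
norm_e = math.sqrt(dot(e, e))
t = (40 + 0.5) / norm_e
sk_star = [round(t * ei / norm_e) for ei in e]
assert frac_dist(dot(sk_star, x)) > 0.25   # misclassified as a U sample
```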
OK, now I'm not really done: I have to show that the secret key looks good, that it doesn't look suspicious in any way. From 30,000 feet, the main issue is: we've got this fake secret key, and we chose it to be super highly correlated with x, so why should it look normal? The main step here is some statistical trickery. Imagine an alternative experiment where, instead of choosing x first and then a secret key correlated with x, we flip around the order in which we choose things. Now I choose my secret key in the completely normal way, just as a short vector centered at zero, and then I choose my x to be very close to the lattice — but then I walk out again some random distance, this time in the direction of my secret key. So whereas before I was choosing my ciphertext and then a fake secret key pointing in the same direction, now I'm choosing a normal secret key and a ciphertext pointing in the same direction. And what you can show is that these two experiments give the exact same joint distribution between the ciphertext and the key: they're basically just two Gaussians with the same amount of correlation between them, and there's a nice map that shows they're exactly the same. [Audience question.] Yeah, statistical — it's just two Gaussians with a certain random amount of correlation between them, and you can view one as depending on the other, or view both as jointly chosen from a Gaussian with some covariance. So now what we've done is: we've got a normal secret key, and we've generated our ciphertext by this process of taking a little bit of Gaussian stuff and then walking some random distance in the direction of the secret key. And under LWE, I can now replace that red stuff, that little red offset, with a totally random x.
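The order-swapping step, in a loose continuous caricature — ignoring the lattice discretization, which is where the real technical work lives; the symbols s, σ, ρ below are my own stand-ins for the two Gaussian widths and their correlation:

```latex
% Experiment 1 (faking):  offset first, then a key correlated with it.
% Experiment 2 (honest):  key first, then an offset correlated with it.
% In both cases the pair (key, offset) is jointly Gaussian with the same
% covariance:
\[
\begin{pmatrix} \text{key} \\ \text{offset} \end{pmatrix}
\;\sim\; \mathcal{N}\!\left( 0,\;
\begin{pmatrix} s^{2} I & \rho I \\ \rho I & \sigma^{2} I \end{pmatrix}
\right),
\]
% and a joint Gaussian has no preferred sampling order: conditioning the
% key on the offset gives the same joint distribution as conditioning
% the offset on the key.
```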
My random x is uniform, and I'm still walking in that direction, so x* is also uniform. So now I'm in a world where I've got a normal secret key and a u-sample over on the right, and that's exactly where I wanted to arrive. Basically, the point is that were you to fake a secret key to make this thing look bad, it's indistinguishable from the situation where you had a normal secret key and someone sent you a random ciphertext in the first place. That's the 30,000-foot explanation; any questions so far? OK, so let me wrap up. The basic scheme I showed you is just bit-by-bit encryption, and it needs a fresh key, a fresh secret key, for every bit. Pretty bad, big keys, but as I said that's inherent if you want this full equivocability. So what we do for plan-ahead deniability is just hybrid encryption: basically, you encrypt a short symmetric key and then use it to encrypt an arbitrarily long message. It's pretty easy to see how to get this, but it only allows you to open the ciphertext as one of two, or some plan-ahead, predetermined number of fake messages. So an open question, really, is to get full deniability, a unified key-gen and encrypt algorithm, with full, negligible distinguishing advantage; maybe there's some way to fake a u-sample as a p-sample, but we don't quite see it yet. The other thing, which is kind of weird, is that although we worked pretty hard, we only know how to do this with lattices. It seems like a pretty nice abstract definition, at least at the level of translucency, and it should be possible to get it from other kinds of assumptions; we have maybe a candidate based on pairings, but it's unclear whether it works yet. Somehow you need this oblivious decryption error: a very rare decryption error that you can't recognize when it happens.
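A toy of that hybrid, plan-ahead idea, with a one-time pad standing in for the symmetric layer. This is my illustration, not the paper's actual construction: with a real cipher the key would be short, and only the key material would go through the expensive deniable bit-by-bit scheme; the point is that the fake opening has to be planned at encryption time:

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt_plan_ahead(real_msg: bytes, fake_msg: bytes):
    """Plan-ahead hybrid toy: the long body is a one-time pad under a key,
    and the fake key that opens the body as fake_msg is precomputed now,
    at encryption time (hence 'plan-ahead')."""
    assert len(real_msg) == len(fake_msg)
    k_real = secrets.token_bytes(len(real_msg))  # toy: pad-length 'key'
    body = xor(real_msg, k_real)                 # the bulk ciphertext
    k_fake = xor(body, fake_msg)                 # opens body as fake_msg
    return body, k_real, k_fake

body, k_real, k_fake = encrypt_plan_ahead(b"attack at dawn", b"picnic at noon")
```

Decrypting the body under `k_real` gives the real message; under coercion, revealing `k_fake` instead opens the same body as the predetermined fake.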
But you can force it to happen if you have the trapdoor. So I don't know, I think it's a nice primitive; it should be instantiable from other assumptions and should have other applications as well, but the search goes on. Thanks. [Question] On the practical side, and the motivation, which you touched on: if you're going to use a specialized system like yours, aren't you really cheating? Suppose, as you mentioned before, you're in a regime that forces persons to use a particular committing system; then you cannot deny. How would you respond to that? [Answer] I've had a hard time responding to this in the past. The thing that I would argue, and will argue in the paper, is that if you're in a regime which is going to force you to use a committing encryption scheme, provably committing, then that's where you're living, and you might as well posit a regime where encryption is not allowed at all. So the question is not so much whether you're cheating by using this; someone could equally ask why you're using crypto in the first place if you have nothing to hide, and I think that argument is kind of bogus, and has been shown to be so. The point is not that you're out to cheat or anything like that; it's really to prevent coercion in the first place. And you can't say what's going to happen in the future: if you were some human rights worker in China, or you need to cross borders between regimes, it's entirely within your prerogative to use these kinds of things. [Question] So the ideal solution would
be something like using some standard encryption scheme, right? [Answer] That's actually one thing I wanted to point out: the encryption scheme we define here is actually quite natural; it's not absurdly pathological in how it operates. You can view it as kind of a twist on existing lattice encryption, and it just happens to have this deniability feature. So you can argue that you're using the scheme for other properties you may like, and it happens to be deniable. It's a different definition, but I think you could base deniable encryption on different assumptions. [Comment] In particular, electronic voting would be a place where you actually want deniability: a coercer asks who you voted for, there's a fake key you can hand over, and only with the real key can you see the real message; you can say, look, I don't have it, I never had it in the first place. [Question] But if you are using this faking, you must have generated this short basis; could you claim you never generated it, that you used the recommended key generation procedure, the software default? [Answer] You need to have kept the short basis to be able to fake a secret key; if you did not keep the basis, you cannot fake. So if you want to fake, you generate and keep the basis; and otherwise you can say, Your Honor, I never intended to fake anything, so I never generated it. [Question] And what if someone corrupts your system right now? [Answer] Yes, absolutely; there I want a wall between coercion and, say, being drugged. I want to distinguish
between coercion and corruption, because if someone breaks into your system before you have a chance to create the fake thing that you want, you're stuck. So this is really for when there's a protocol involved in being coerced, a subpoena or the like; the subpoena is usually for your hard drive, and maybe in some cases you can actually prepare the hard drive. [Comment] A one-time pad is not just inherently deniable; there was a real live case of equivocation undercover back in the forties. [Answer] Good point, yes: the one-time pad meets all of this. It's a secret-key scheme, of course, but good enough there. [Question] When you're coerced, suppose the coercer says: show me this ciphertext and the two possible openings; which one generated the real one? In this case you get the exact same ciphertext either way? [Answer] That's right. Only if you see the coins can you tell, but then you would know that I kept the secret key, the short basis. Wait, wait, you're confusing something, because the basis is on the key-gen side; the encryptor is on the sender side, and there's no basis there. So again, I want to be clear: I'm not fooling anybody. I'm not convincing the coercer that this is actually the message that was sent or received; I just want to protect myself, by pre-empting the ability to be coerced and pinned down to a particular message. It's really more about, if you view it game-theoretically, the coercer knowing that you're not going to go along with it, so maybe they won't even try; and if they do, I don't think you can get a subpoena saying 'give me the short basis for this' when no such thing ever existed. It would be like having
a subpoena for the unicorn in the back yard. [Question] About the other direction, faking a u-sample as a p-sample: you somehow have to get a vector which points very orthogonally, and when x is uniform out there, the key has still got to look properly distributed; it seems hard just to make the proof go through. [Answer] Yeah; the problem is that even though you walk out in that direction, you've still got to sample the secret key from around that region, and the radius of that region is already too large. It's going to interact badly with the random x, so you can't keep it under control. It doesn't seem to be an inherent obstacle, though; it's possible the claim is still true. [Question] In another setting, when I'm sending a ciphertext, the prescribed encryption algorithm is also telling you to send a message of some bounded length, so you're intentionally padding out? [Answer] Yes; so you can only equivocate up to however much padding you put in your ciphertext. [Comment] We had this discussion before, but by analogy: if fake encryption and fake proofs had more functionality, in some sense, of course people would be skeptical. But consider the situation where you come to a service, and the service asks whether you want to get, like, five bucks or not, and you say yes; you can't say no, or you don't get it. But then when somebody later tells you, give me the 5,000, you can say: I chose no. So, kind of game-theoretically,
people would expect you to take the functionality, you know, why won't you pick up five bucks, but they can't prove anything more. [Answer] Right; that's what this deniability is about: having a plausible, non-falsifiable story that accounts for everything. [Question] Another thing: deniable encryption makes simulation easier, so if you use it as part of a larger protocol, you might end up making certain protocols much more efficient, and get deniability and other properties for free. Have you thought about particular functionalities where things get easier? [Answer] Well, one consequence of this is that you get a two-round non-committing encryption scheme, whereas before we only had three rounds. That's on the positive side even if you don't care about deniability or faking at all. [Question] What about CCA security? [Answer] That's a really good question; I didn't think about it too much, but it's not clear whether deniability rules out CCA entirely. With symmetric security you can handle many messages, and you don't necessarily need to lie about all of them; so the CCA question, I don't know that it's ruled out. [Session chair] So this is the last talk, on: we have the technology, now what's next? [Speaker] Thank you very much. What I want to talk to you about today is a little bit different from the other talks we've had so far. I want
to instead go over some of the things that we can do now that we've had such progress in cloud cryptography. This workshop, and the other work of the last five to ten years, has seen real breakthroughs in cryptography: we now have homomorphic encryption, we now have efficient secure multi-party computation, at least for some functionalities, and we have tamper-resistant cryptography. These are all problems motivated by practice that are making serious theoretical strides, and what we want to do with this, I suppose, is make things that were previously impossible possible. For example, I recently heard we had a customer who wanted to run some inference algorithms over data held by our search engine; they didn't want to reveal what that algorithm was, and we of course don't want to reveal all of our search logs to everyone in the world. That sounds impossible, but with the techniques we have today, we know it's becoming more practical. In addition, there's a whole class of people who would like to move to the cloud, but there are things standing in their way, security concerns foremost among them, and one of the opportunities we have here is that the techniques we're talking about can perhaps get rid of some of those roadblocks and move people onto the cloud who otherwise wouldn't have gone. So I just want to share a few things I've run into that actually stopped me from running things on the cloud, and match them up with the kinds of things we talk about in workshops like this. This is the Amazon Elastic Compute Cloud terms of service, as of last December. How many people actually run something on EC2? OK; Tom, of course, is our resident EC2 internals expert. I wanted to run an internal tool on EC2 for testing purposes, to see how well it ran on the cloud, and that was fine, but then I had to actually go read the terms of service. Does anyone
want to guess which of these terms is objectionable? Any bets? Anyone? Yes, actually both of those, 4.4.1 and 4.4.2, are the answer; in particular this one, where you agree to give Amazon a copy of your application so they can verify compliance with this agreement. I didn't include all the context, but really the agreement is saying you're not running gambling, you're not running malware, you're not trying to hack other people's computers. And of course you see that and say, wait a second: what if my application is proprietary, like a pre-release version of some product? Or even if it's not a to-be-released product, what if it has proprietary information? What if I don't know what 'verifying compliance' means? That's fairly vague: who is going to see this application? And it's vague whether the data my application works on is included or not. So this actually was a blocker; it stopped me from running something on Amazon. In the context of the stuff we're talking about today, we have some tools now that could address this concern: we could obfuscate. Yes, exactly; the question was, could we give them a universal Turing machine as the application, and the answer is yes, of course we could, and that's a sort of crude form of obfuscation, because it wouldn't give them anything useful. But how do we actually convince ourselves that that's the sufficient thing to do? These are all different things we could think of. We could compute on encrypted data, to alleviate the issue of having to hand over the data set we use; or, one thing I've heard happens in practice, people ship only small amounts of data at a time to the application in the cloud, so if they lose some of it, it's not a big deal. But reasoning rigorously about all of these is one thing, and another thing that comes up is how you convince yourself, and convince your peers, that this is the right thing to do.
Another thing comes up when you start talking about running very high-security jobs in the cloud: physical security requirements. Here on the left is part of a best-practices checklist for a standard called SAS 70. SAS 70 is not super interesting in itself; it's just a list of things you have to do in order to get a certain stamp, which then says, yes, my data center is good enough to handle super-secure material. And these are all very interesting, sort of physical, security things: you must have bullet-resistant glass; you must cut your vegetation so that intruders cannot hide behind it; you must have a mantrap. Oh, a mantrap is a sort of entrance device with two doors where only one of them will open at once: you enter the first door, it shuts behind you, and then if you're a bad person you can't get out. So we laugh, but this is actually fairly serious, because the people who wrote this standard were thinking in terms of that picture over there, with these giant cages, and one cage around each property that holds different classified or high-sensitivity data. That is not very elastic: you're not going to have people in Azure or EC2 going around moving cages on the time scales it takes to spin up VMs. And this leads us to another issue, which is that several standards, including some we have here, require proof of deletion. If I give you sensitive data for processing, I need you to tell me you deleted it, or I need some way of knowing that it was deleted. Of course, if you have your own machine, that's fairly straightforward, although not quite as easy as it sounds; but it imposes a cost on a cloud provider that some of them aren't willing to take. So what do we do about this, from the point of view of people who are familiar with cryptography and know how to use it? Well, one thing working in our favor is that not every piece of data needs these kinds of protections.
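One folklore cryptographic answer to the proof-of-deletion problem (my illustration, not something proposed in the talk) is crypto-erasure: upload only ciphertext, keep the key on premises, and 'delete' the data everywhere by destroying the one key. A stdlib-only sketch, with SHA-256 in counter mode standing in for a real cipher:

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """SHA-256 counter mode as a stand-in stream cipher (toy only)."""
    out = b""
    ctr = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:length]

class CryptoErasedBlob:
    """The cloud stores only `ciphertext`; the key stays with the owner.
    Destroying the key is the 'proof of deletion'."""

    def __init__(self, plaintext: bytes):
        self._key = secrets.token_bytes(32)
        self._nonce = secrets.token_bytes(16)
        pad = keystream(self._key, self._nonce, len(plaintext))
        self.ciphertext = bytes(p ^ k for p, k in zip(plaintext, pad))

    def read(self) -> bytes:
        if self._key is None:
            raise RuntimeError("data cryptographically erased")
        pad = keystream(self._key, self._nonce, len(self.ciphertext))
        return bytes(c ^ k for c, k in zip(self.ciphertext, pad))

    def erase(self) -> None:
        self._key = None  # without the key the ciphertext is useless
```

The auditable claim shifts from "every copy of the bytes is gone" to "the only key is gone", which is a much smaller thing to demonstrate.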
In fact, there's actually a movement toward labeling data, at least inside Microsoft and among some of the people using our stuff. There are different ways to spell it, but roughly: low, medium, and high. Around here we're actually asked to say whether data is low impact, medium impact, or high impact. What does that mean? Low impact is things like public web pages and public keys, stuff where it's pretty much OK for it to be distributed to the entire company; we care if it gets out, but we don't care that much. Medium is things like addresses, phone numbers, personally identifiable information. And on high you have things like credit cards, passwords, and pre-release software, and that's where we start getting these physical security requirements, where people start saying, hey, wait a second, I'm not going to give you this data unless you can tell me it's got guards around it 24/7. And we actually have this thing now where the new SharePoint, our wiki solution, will ask you to do mandatory labeling of data, so at any given point in time you can know whether this is high security, medium security, or not. Where this dovetails with the work we're talking about here is that it tells you where to apply some of the techniques we've been discussing: some of those things may have to stay on premises, and other things may go into the cloud. The other interesting thing that comes up is that we actually scan for high-security data: we use an RSA product that searches for data that's been identified as high impact, and we look for cases where people accidentally posted this data in places they shouldn't. This is actually interesting when you talk about encrypted data, because now you have a tension between this product, which we like because it finds the social security numbers people left lying around on that wiki page, and the fact that the data is encrypted, which means it's harder to scan.
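A toy version of that tension (hypothetical pattern and data; real data-loss-prevention products are far more sophisticated): a regex scanner finds an SSN-shaped string in a plaintext page, but is blind once the page is encrypted:

```python
import base64
import re
import secrets

# Toy scanner: a single SSN-shaped pattern (real DLP tools use many rules).
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def scan(text: str) -> list:
    """Return all SSN-shaped strings found in the text."""
    return SSN.findall(text)

page = "owner: alice, ssn: 123-45-6789, room 42"
found_in_plaintext = scan(page)

# The same page after a toy XOR encryption plus base64: the scanner sees
# nothing, since the ciphertext retains no plaintext structure (and the
# base64 alphabet never even contains the '-' the pattern needs).
key = secrets.token_bytes(len(page))
blob = base64.b64encode(
    bytes(ord(c) ^ k for c, k in zip(page, key))).decode()
found_in_ciphertext = scan(blob)
```

Searchable encryption, mentioned next in the talk, is one proposed way to let such a scanner work without handing it the decryption keys.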
So searchable encryption might be a way out here, but we do have these processes. Intrusion detection is another one, actually, where people say: wait a second, I have encrypted packets going through; what do I do if I want my intrusion detection system to actually look at these packets and tell me whether they're good or not? Figuring out a way to do that, without completely giving away your keys to everyone who says they need them for a middlebox, is a difficult problem. The other thing I just want to bring up is that there are these audit requirements starting to emerge for cloud services. This is one effort, the A6 group: they have an automated way of querying your cloud provider to ask whether it meets certain audit standards. So if you're a cloud provider, you can put up a little text file saying, yes, I have this certification, and yes, I do have cages around my data center, and yes, I do use this kind of encryption. The goal of these guys is to have you, as a customer of cloud providers, be able to write little scripts that say: I want to pick whichever cloud provider meets these minimum requirements, and I want to go ahead and host my data with whoever's cheapest today. This is interesting from the point of view of cloud cryptography, because now the question is: can we actually use cryptography to assist this auditing? Can we use the cryptographic mechanisms we've talked about in this workshop to prove that we really are meeting the guarantees we say we are, at lower cost than having hordes of people from PricewaterhouseCoopers come in and inspect everything every week? For example, and this is just one example among many you could think of, we have something we built on top of Azure blob storage, which is just a simple key-value store, very similar to Amazon S3.
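The attestation mechanism described next can be sketched as a hash-chained log. This is my own toy format, not the system's actual one, and the real system, per the talk, signs entries with public-key signatures so that anyone can verify them; the sketch omits signatures to stay stdlib-only:

```python
import hashlib
import json

class AttestationLog:
    """Each entry commits to the entire previous history via a hash chain,
    so corrupted or altered entries become visible to any auditor."""

    def __init__(self):
        self.entries = []

    def attest(self, op: str, key: str, value_hash: str) -> None:
        """Append one attestation for a storage operation."""
        prev = self.entries[-1]["chain"] if self.entries else "0" * 64
        body = json.dumps([op, key, value_hash, prev])
        self.entries.append({
            "op": op, "key": key, "value_hash": value_hash, "prev": prev,
            "chain": hashlib.sha256(body.encode()).hexdigest(),
        })

    def verify(self) -> bool:
        """Re-walk the chain; any tampering breaks a link."""
        prev = "0" * 64
        for e in self.entries:
            body = json.dumps([e["op"], e["key"], e["value_hash"], e["prev"]])
            if e["prev"] != prev:
                return False
            if e["chain"] != hashlib.sha256(body.encode()).hexdigest():
                return False
            prev = e["chain"]
        return True
```

Signing each `chain` value would let third parties verify without trusting the log holder, and comparing the latest chain value against one obtained out of band would catch replay of a stale prefix.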
What we do in this system is: every time you store or retrieve data, we ask you to write a little attestation; think of it as a simple log message that chains back to all your previous log messages. We built this, using public-key signatures, in such a way that anyone in the world can verify these logs. What this means is that a third party can audit the use of the system: if the blob storage ends up corrupting bits, or tries to replay stale data, a third party can tell, and then you can say, OK, we screwed up, here's some money back for your trouble. That's just one example where you could use cryptography to produce an audit trail that dovetails with some of the work people are already doing in the community on cloud auditing solutions. We also did some performance work and showed that you can get this at a modest cost; so if you're interested in this, we have an implementation, talk to me and I can show you the paper and everything. Where this is all going: I've talked about some places where we have possible synergies between what people are doing in terms of policy and what we've talked about in this workshop, but the truth is there's a huge catch-up left to happen between the people making these policies and those of you here in this room. I'll just give two examples: one is from corporate policy, and the other is from European data protection, which I was lucky enough to discuss with Caspar Bowden, who was here yesterday. Both of these have a notion of sensitive data that is difficult to move outside of some perimeter. In the corporate case, we have this high-impact data that we're not really allowed to move off premises; we're not allowed to give it even to Azure, or even to EC2. And in the European case, we have personally identifiable information, a whole set of things which are not supposed to leave
the European Union; there are complicated exceptions to this, but in general it's not supposed to leave. Now the question is: what if I encrypt this data, and the key is not available to the cloud? I take the data, encrypt it to a blob, and put it on the cloud; is that still high-impact data? Is that still data governed by these rules? And the answer is yes: according to the rules as they are now, you do not get a free pass just by encrypting. Now, we know that if the keys are handled properly this doesn't make any sense, and we know, based on the stuff we've talked about here, that you can still get a lot of functionality by leveraging the new breakthroughs in cryptography we've had, and maybe even get better guarantees by having things in the cloud than by storing them on premises. But the fact is, the teams who set the requirements today don't understand what we're talking about yet. Inside a company, it'll be a bunch of teams, legal, engineering, and IT, who get together and hash this out over months and months and months. And I had a chance to ask Caspar who in the EU is making these rules: there are about 2,500 civil servants in the EU who make these rules, and about 10 of them know what they're doing from a technical point of view. So this is the thing I just wanted to set up here: look, we have this amazing technology, but there's a bit of work left to do before we can make the policy rules catch up with it. Because, to sum up, we have incredible progress; everything we've heard about here, even if it's not completely practical yet, shows that in the last five years there's been a lot of emphasis on making really practical technologies that can be deployed. But the question we have now is how to make them match up with the kinds of rules people are putting in place, which are preventing these applications from
happening on the cloud, or happening at all. And then our opportunity is to ask how we can dovetail these new techniques with the kinds of audit requirements, and the kinds of data security requirements, that you see people wanting to have. Finally, I just want to say that there's an opportunity for us, or an obligation for us, to educate policymakers that another world is possible, that we can do some of these things we're talking about; but we have to actually engage, and figure out who exactly is the right person to talk to, and what it will take to change their minds. Thank you.