Today we have Yuri Polyakov from NJIT. Yuri is actually an expert in a lot of things, ranging from astrophysics and statistical physics to chemical engineering, and he has all of the degrees. He is also an expert in implementing homomorphic encryption, which is what he has been here for this past week, but today he is going to tell you about program obfuscation and practical implementations of it.

Thank you, Kim. Is the mic working? I just want to make sure. So, a couple of words first: there is a DARPA program, DARPA SafeWare, which I'll briefly discuss, and the goal of that program is to develop something practical as far as cryptographic program obfuscation is concerned, something built on standard assumptions, or at least something related to standard assumptions. But before we get to that part, a brief introduction to the NJIT Cybersecurity Research Center. I work at this center; it's newly established, around 2015. We have some tenure-track faculty, research professors, scientists, and PhD students, and the focus of the research at the Cybersecurity Research Center is on encrypted computing, which is what this is related to, mobile and Android security, systems security, and applied cryptography in general. This particular work that I'm going to present is related to PALISADE, a lattice cryptography library. The idea of this library, how it came about, comes again from DARPA: there was a DARPA PROCEED program, and specifically a cipher project that Kurt Rohloff was working on. As part of that program a specific prototype, an FPGA prototype of the LTV homomorphic cryptosystem, was created, and that basically gave us the idea that maybe we should try to build a library from it. After this, a number of projects, some funded by DARPA, some funded by IARPA, Sloan, Simons, and NSA, prompted us to add certain capabilities to this library. And of course the name
of the library implies that it's based on lattices, so primarily we're dealing with constructions based on Ring-LWE at this time, but generally speaking other constructions, such as ones based on plain LWE, are also supported. Some of the projects that triggered new modules being added to the library: one of them is cryptographic program obfuscation, which I'll talk about next; then homomorphic encryption for statistical analysis, again similar to what is done, for example, as part of SEAL; then proxy re-encryption based on homomorphic encryption schemes, where key switching is used to perform proxy re-encryption in a more efficient way than the bootstrapping-based re-encryption approach proposed by Gentry; and then another project is trying to make homomorphic encryption more usable. That's the RAMPARTS project: trying to write homomorphic encryption programs outside any C++ library, in this case using Julia, and trying to make the FHE transparent to the user, as if you were working with plaintext. Who are our partners in implementation and collaboration? MIT, specifically Vinod and Shafi; UCSD, Daniele's team; Worcester Polytechnic with Berk Sunar; National University of Singapore; Sabanci University. On the industry side we work with Raytheon, which used to be BBN; with IBM Research, with Shai Halevi; and of course with Victor Shoup as well, from NYU; and then some well-known companies such as Lucent, Vencore, Galois, and Two Six Labs. This library is currently distributed under a BSD 2-clause license, which means it can be freely used; it's an open-source library, academic version, and we support multiple platforms, for instance Windows, Linux, and Mac, primarily targeting GCC and Clang. Now, a couple of words about the DARPA SafeWare program that triggered the development of this particular capability. I'll mention a couple of basic facts about DARPA in case
some of you are not very familiar with it. DARPA is the Defense Advanced Research Projects Agency, founded back in 1958 in response to the launch of Sputnik 1 by the Soviet Union. The research program in the United States was essentially overhauled after the Soviet Union, a country that had suffered through World War II, was able to launch a satellite a few months sooner than the United States, and DARPA was founded. It typically works with, for example, four-year projects: there is a certain very challenging and ambitious task, sometimes called a sci-fi task, that no one believes is practical, and then DARPA tries to get top researchers from academia and from defense industry companies together to try to build something that is actually practical and solves that problem, using both theoretical and practical techniques. Some examples of DARPA projects: the ARPANET in 1969, then wavelets, onion routing, and the list goes on; a lot of interesting technologies were introduced through DARPA projects. One of the recent ones that's related to this talk, and related to work that started at Microsoft Research, is the DARPA PROCEED program, which finished in 2015. It was a four-year program that gave birth to BGV and to efficient versions of BGV, including the versions with packing. For instance, the original BGV'11 paper was funded by the DARPA PROCEED program; the GHS'12 paper on AES circuit evaluation by Gentry, Halevi, and Smart, in other words the HElib implementation, was funded by DARPA; the paper dealing with packed encoding for BGV was funded by it; and also the original paper by Brakerski on scale-invariant schemes, which is the LWE prototype for what we're using now as BFV, for example. A lot of those schemes were proposed and implemented by IBM Research and BBN at the time as part of the PROCEED program, so it gave a big push to the field. And based on this, I would say
well, based on the success, and I do still call it a success; someone may argue how practical FHE is, but I think it's quite practical for some applications, and the research progress achieved over those four years was definitely very substantial. Based on that success, another program, by a different PM, was started back in 2015. The program is called DARPA SafeWare, and the idea is to tackle, I would say, the counterpart problem to homomorphic encryption, where you're trying to obfuscate the actual algorithms, the actual computation, and the input can be, for example, cleartext. So you can think of it as a complementary problem to FHE. I'm going to report some of the results of our group, mostly implementation results, and the current participants in that project are MIT, IBM Research, and a number of well-known universities. So let's talk briefly about the motivation for obfuscation. With FHE, the idea is to encrypt the data and perform a certain function on the data, assuming that the function computation is known, or in the clear. That's obviously great technology when you want to keep sensitive data private, but there is also another side: what if you need to actually protect the algorithms? For instance, let's say extensive work has been done to train a neural net, a certain classifier has been built from that, and you want to secure the classifier. That's a different problem, and this problem is quite relevant for the Department of Defense, as I briefly mentioned. The idea is: you build this classifier, you deploy it, and you thereby make it available for reverse engineering; can we make it hard for someone to actually figure out how the classifier works and which algorithms are used? The typical approaches we're all familiar with, and I'm sure all of us have seen this, are that compiled software can be decompiled using various techniques, disassemblers, etc. The
obfuscation techniques that are currently used in practice are typically heuristic: they just tweak certain things, but they're not based on standard assumptions, and typically they're not based on sound theoretical foundations. One of the questions that was asked by the program obfuscation community is: is it possible to guarantee that reverse engineering a program would be at least as hard as exhaustively exploring its input-output map? Initially the concept of the virtual black box (VBB) approach was introduced, and the idea of that notion is: if you're given some obfuscated code and you start reverse engineering it, you start performing analysis, you won't be able to learn more from it than you would by just feeding inputs to a black box and observing the outputs. That particular model of virtual black box capability was initially used as the main model; I'll discuss a little bit more what happened to that model, how it transitioned to something else, and what we can do in practice. From the motivational perspective, examples of why the Department of Defense is interested: let's say some hardware is captured that has some very sensitive software on it; how can you make the reverse engineering hard? For instance, the Department of Defense can sell some weapons to another, friendly country, but how can they make sure that the algorithms used internally cannot be reverse engineered? One way to envision obfuscation, again theoretically, in an idealistic manner, is that you take an arbitrary program and you obfuscate it into some kind of intermediate code; you can think of it in terms of Java bytecode, just for simplicity. Then you deploy that bytecode, say on some virtual machine or in any other environment, in an obfuscated form. That's the idealistic vision that we hopefully
can achieve, or will try to achieve. And of course, for the Department of Defense there are certain specific problems that may not require a general program obfuscation capability but are very relevant: for instance, linear classifiers that are used in automatic target recognition systems, and there could be a lot of examples, for example a missile seeker, or other examples relevant to the Department of Defense. It could be nonlinear classifiers; there could be linear or nonlinear filters used in signal processing, for example, which are quite typical in many systems used in the defense industry. Another example is supervisory control and data acquisition (SCADA) control systems. [Audience question] Yes, that's actually a very good question. One of the challenge problems formulated by DARPA was a very simple problem; let me briefly describe it. You have a 40-bit classifier: you have five bytes, and for each of the bytes you have a threshold. That's a very basic classifier problem that was used as a challenge problem, and obviously it's very easy to perform input-output analysis, in the VBB model that we discussed, to learn it using a binary-search approach in a few iterations per byte. So that's a very valid comment. As for how to make it hard to learn the full space, there are different techniques and different complications that can be introduced, for example varying the function, etc. There could be different techniques to make the complexity of learning, say, a 100-bit pattern much harder: to remove any patterns, to randomize in a certain manner where it would make sense. It's a very important question, and there are some solutions that we provided for that; it's also very relevant for the problems the Department of Defense is trying to solve. The security community was questioning that challenge problem, and we had some back-and-forth.
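To see why the security community questioned this challenge problem, here is a small sketch (my own illustration, not DARPA's or the speaker's code, and assuming for concreteness that each byte must be at least its threshold) of how pure black-box input-output access recovers a 5-byte threshold classifier: binary search recovers each byte's threshold in at most 8 queries, so about 40 queries suffice for the whole 40-bit program.

```python
def make_classifier(thresholds):
    """Toy 40-bit classifier: accept iff every byte is >= its threshold."""
    return lambda xs: all(x >= t for x, t in zip(xs, thresholds))

def learn_thresholds(classify, n_bytes=5):
    """Recover each threshold via binary search, probing one byte at a
    time while pinning the other bytes at 255 (always accepting)."""
    learned = []
    for i in range(n_bytes):
        lo, hi = 0, 255          # smallest accepted value IS the threshold
        while lo < hi:
            mid = (lo + hi) // 2
            probe = [255] * n_bytes
            probe[i] = mid
            if classify(probe):
                hi = mid
            else:
                lo = mid + 1
        learned.append(lo)
    return learned

secret = [17, 200, 3, 99, 128]
assert learn_thresholds(make_classifier(secret)) == secret
```

This is exactly the "binary tree" input-output attack mentioned in the talk: nothing about the obfuscation is broken, yet the full function is learned, which is why the VBB guarantee alone is not enough for such simple functions.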
Of course there were discussions back and forth, but there are ways to make this problem more complex so as to achieve a certain level of security. I can jump ahead to the very end of the presentation: one of the ways to attempt to solve this problem is to use token-based obfuscation. There, we're effectively bounding the number of queries that can be made: every time you send a query, there is a piece of trusted hardware that issues a token, the evaluation works only if that token is present, and only then can you get the result of a certain query. Techniques of that nature are something that we're actually exploring now, to make sure that the black-box model itself does not introduce issues, especially for relatively simple functions. Those are very good questions, and ones we've been working on. Now, a brief discussion of general-purpose program obfuscation. Maybe I'll just go a couple of steps down: the concept of general-purpose VBB obfuscation was discussed in the paper by Barak et al., and it was shown that this virtual black box approach cannot be achieved for all possible functions; it was theoretically shown that the approach by itself cannot be feasible in general. This is why a different security model was introduced, called indistinguishability obfuscation. The idea of that model is that you cannot distinguish between two obfuscated programs that have the same functionality, in other words that produce the same truth table, and that's used as the notion of having a certain level of security. Then a couple of, I would say, breakthrough works appeared around 2013, and they actually spawned DARPA's interest in this area: two multilinear map constructions using graded encodings were proposed, one from ideal lattices and the other over the integers, by two independent groups, and that motivated the
interest: we could have some interesting constructions, GGH13 and CLT13, as they're called in the literature, and hopefully they would help us build a general-purpose program obfuscation capability. General-purpose may refer to general branching programs; the idea is that you can work with almost any arbitrary function in the idealistic sense, similar to how FHE can support arbitrary computations. Then there were problems with these two approaches, and they were, I'll use the term, broken pretty quickly; various attacks were found against both constructions. A more secure construction was proposed in 2015, typically referred to as the GGH15 construction, which is also based on multilinear maps but uses LWE for the actual graded-encoding construction. That's currently considered the most promising construction in use. However, one important comment on all general-purpose constructions, even GGH15: the fact that the construction itself can be broken under certain conditions does not mean that a specific obfuscation built from it would be broken, because that may not be the same as the case of breaking the multilinear map encoding itself. Still, there has been a lot of, I should say, skepticism in the field because of these limitations, and I'll start with the initial state of program obfuscation and its challenges, and show where the field is headed now. So we've talked about the security side; now let's talk about the best implementation results in the literature. The first non-trivial implementation of cryptographic program obfuscation, and when I say non-trivial, I mean not something like a point function; it's obviously very easy to write an implementation of a point function, you can use any type of hashing technique and have an efficient result, but building something slightly more complex, for example
something that typically requires a branching program, in this case a permutation branching program, is a more complex task. In the very first work, and I have some insight into this particular work, the results were so great that unfortunately the paper couldn't be accepted to any conference, because all the reviewers questioned the practicality of the result. The results were: a 14-bit point function would take nine hours to obfuscate at sixty bits of security; the obfuscated program size, corresponding to the branching program for this point function, would be 31 gigabytes; and the evaluation runtime would be 3.3 hours. That was the first implementation result in the literature; there is an ePrint paper that I'm providing a reference to. Obviously there were some major limitations in the underlying constructions, which for example required a very large program size, and we'll briefly talk about them and how some of those challenges have been addressed. In the next paper, at CCS'16, essentially the same group, a variation of that group, was able to obfuscate an 80-bit point function with 80 bits of security, with somewhat better runtimes and program sizes. Something I want to note: typically, program size and evaluation runtime are the two most important metrics we're concerned about, as we'll discuss later. In this case the program size was 8.3 gigabytes and the evaluation time was three minutes, so it's getting better. Both the CCS'16 and CCS'17 works were funded by the DARPA SafeWare program. Then, at CCS'17, Shai Halevi, with Victor Shoup and coauthors, presented their implementation, their attempt to solve one of the challenge problems for DARPA SafeWare, and they did something non-trivial: as you probably noticed, the first two results dealt with point functions, but in this case they actually did something like oblivious read-once branching programs, something that's
a little bit non-trivial in terms of the complexity of the actual program and the obfuscation applied, and we should say that this is a much more complex program than the two above. The obfuscation time was 23 days for 80 bits, with about 100 accepting states for that particular program; the program size was 9 terabytes; and the evaluation time was 25 minutes. So it's a big step forward, but obviously not practical enough yet. The results that I'm going to present here focus on an implementation of so-called conjunction programs, which can be viewed as, for instance, binary patterns with wildcard characters; that's the simple way to look at them. Our results are, I would say, significantly more practical. Of course, the approach we used was a little different from the approach of the IBM Research group; it was a more special-purpose approach than Shai's. We were able to achieve an evaluation time of 32 milliseconds, which was very close to reasonable expectations, for a 32-bit program, and the program size was six gigabytes, so we were basically close to solving a specific challenge problem that was set by DARPA. For a 64-bit program with a comparable security level, essentially close to 64 bits, the obfuscation time was 6.7 hours, the program size was under one terabyte, and the evaluation time was 2 seconds. So these four works give you an idea of where the field is right now, or at least as of September of last year. Now I'll quickly describe the particular construction we worked with and then discuss the improvements. The logic we're looking at is conjunction obfuscation for a very basic case of four bits, with the allowed input parameters: we have either zeros or ones, or we have a wildcard character, so in other words there can be multiple accepting inputs for this particular pattern. And there are three main operations that are typical for any program
obfuscation scheme. There is key generation, which is done offline and typically deals with things like trapdoor keys. Then there is the obfuscation stage, which takes a certain program and produces an obfuscated output, something that can be used afterwards; that's typically also considered an offline operation. And then there is the actual program evaluation: you've generated this obfuscated program, you send it somewhere, and then that program can be executed for certain inputs to produce certain outputs. It's a very simplistic model, but this model of key generation, obfuscation, and evaluation is essentially used for all approaches of this nature. What are the possible use cases for this type of functionality? It could be biometric matching, for example iris matching; that's one of the examples where this could be applied. Now, maybe a couple of details about these constructions, though I won't spend too much time, since this is specific to conjunction obfuscation. You can think of this input pattern as a finite state machine, we can even call it a Moore machine, and the idea is that you could have multiple accepting paths: as we can see in this case, whenever we have a certain wildcard character, we can basically take both branches. That's the difference between this and a point function: with a point function you have essentially one accepting input, while in this case you can have multiple accepting inputs. The input bits, if we look at it as a finite state machine, drive the state transitions, and for each state transition, which roughly corresponds to each bit, we need to generate certain, let's call them, matrices of ring elements, some big matrices, and the multiplication of these matrices afterwards is used for the evaluation. And one of the challenges with this approach is how big those matrices are
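To make this concrete, here is a small sketch (my own toy illustration, not PALISADE code) of a 4-bit conjunction, first as a wildcard pattern viewed as a finite state machine, and then in the "each transition gets a matrix" style: each input bit selects one published matrix per position, the chain of products collapses to the identity exactly on accepting inputs, and the scheme's small-norm threshold test is replaced here by a plain equality check. This toy has no security whatsoever; it only shows the data flow.

```python
import numpy as np

PATTERN = "1*0*"                      # conjunction with wildcards

def match_fsm(x):
    """The conjunction viewed as a finite state machine: bit i drives
    the transition out of state i; any mismatch goes to a dead state."""
    state = 0
    for i, b in enumerate(x):
        if PATTERN[i] not in ("*", b):
            return False              # dead (rejecting) state
        state += 1
    return state == len(PATTERN)

# Publish one matrix per (position, bit value). On accepting choices the
# matrix is the identity, so the chained product over an accepting input
# is the identity; a mismatching choice injects a random matrix.
rng = np.random.default_rng(7)
n, q = 4, 97                          # toy dimension and modulus

def random_invertible():
    while True:
        M = rng.integers(1, q, (n, n))
        if round(np.linalg.det(M)) % q != 0:
            return M

MATS = {(i, b): (np.eye(n, dtype=int) if PATTERN[i] in ("*", b)
                 else random_invertible())
        for i in range(len(PATTERN)) for b in "01"}

def match_matrix(x):
    prod = np.eye(n, dtype=int)
    for i, b in enumerate(x):
        prod = (prod @ MATS[(i, b)]) % q
    # stand-in for "result below a threshold after multiplying by pk"
    return bool(np.array_equal(prod, np.eye(n, dtype=int)))

all_inputs = [format(v, "04b") for v in range(16)]
accepting = [x for x in all_inputs if match_fsm(x)]
# accepting == ['1000', '1001', '1100', '1101']; the matrix view agrees
```

In the real construction the identity matrices are replaced by GGH15-style encodings with trapdoor-sampled noise, which is why the matrices are so large and why the final test is a norm threshold rather than an equality.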
essentially. There is a logic of computing a product, so I'll briefly discuss the details. For each possible state, in a sense, you generate certain matrices, and in this particular construction for conjunction obfuscation, the construction we're using, which was proposed by Brakerski et al., we need two sets of matrices and two products that are used afterwards. Let's say we have a pattern 1 0 0: the idea is that, using the input values, you choose certain matrices from the full array of matrices, and afterwards you multiply them into the two products, in a certain way; obviously you do it in a way that preserves correctness. At the end you multiply by the public key and you get the result, and if the result is below a certain threshold, the program returns one; otherwise it returns something random, which will obviously be larger than the threshold. This also illustrates the idea of multilinear map encoding: it's typically based on a chain of multiplications that, at the end, you multiply by the public key. Now, to illustrate what was done as part of this project: this talk is at a high level, but I also have the underlying paper open, so we can get into more detailed technical discussions as needed; I'll try to give a high-level picture. When we started, at the first PI meeting at DARPA at the end of 2015, we submitted an artifact and provided results; what we provided was for a 3-bit pattern, and if you extend those results to a 32-bit pattern, this is what we get. When we started, the state of the art was giving us a key generation time of three years and a key size of 15 petabytes, and the main challenge there is the Cholesky
decomposition matrix, which basically introduces a number of challenges in the so-called trapdoor sampling that is an essential component of this type of construction. The program size for the 32-bit case would have been 600 terabytes, the obfuscation time would have been 70 hours, again with those results extrapolated to the 32-bit pattern, and the evaluation time would have been nine hours. Skipping all the details of what we did from that time, the end of 2015, until September 2017: we were able to reduce the key generation time to 100 milliseconds (and since it's an offline operation, that's not even the relevant part), the key size to one megabyte versus something on the order of dozens of petabytes, the program size to six gigabytes versus six hundred terabytes, the obfuscation time to 6.2 minutes, and the evaluation time to 32 milliseconds; if I go back, the old evaluation time was nine hours. So essentially we were able to achieve improvements of multiple orders of magnitude: in some cases nine orders of magnitude, in some cases four or five. [Audience question about the key generation] Yes, this is a very good question. The scheme is based on a trapdoor construction, which has a certain public key and a certain secret key, and we generate trapdoor pairs for each level of the program; that basically maps to the pattern we're trying to work with. When we perform the obfuscation itself, the keys have to be available; in other words, all the secret keys have to be available when we perform the obfuscation. When we perform the evaluation afterwards, we only need the matrices encoding the obfuscated program and the public key that we multiply by at the end, so we obviously don't need the secret keys after this. So, to answer your question, that key size was needed for the obfuscation stage; the key size requirements are
smaller for the evaluation stage. So what are the actual improvements? Again, it would take some time to go through all the technical details, but I'd be happy to answer any questions. A lot of the improvements came from optimizing the trapdoor construction and the sampling algorithms. The prior state of the art before this work was essentially the Micciancio-Peikert 2012 (MP12) paper, which described an approach based on primitive matrices, the so-called gadget matrices or gadget lattices. First of all, there were restrictions with respect to the modulus: MP12 provided efficient algorithms for power-of-two moduli but not for arbitrary moduli, and in our construction we were using prime moduli, so we could not use the power-of-two approach; that was very important. Second, the trapdoor sampling itself is a pretty complex problem, essentially related to inversion, and it typically includes two stages: there is so-called gadget (G) sampling, and then there is perturbation sampling, and the perturbation sampling typically required a very large Cholesky decomposition matrix that had to be explicitly stored as multiprecision floating-point numbers. One of the improvements used in this implementation is the work by the UCSD team, Daniele Micciancio and Nicholas Genise, where they found a way to perform this perturbation sampling, which is basically what makes the samples spherical after applying the G matrix, using the algebraic structure of cyclotomic fields, the same structure as the cyclotomic rings used in this implementation. In other words, the Cholesky decomposition matrix was fully eliminated, and the decomposition itself is done online in a very efficient manner, without having to build a very large matrix. A couple of other changes were made as well, but they are quite technical.
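To give a feel for the gadget-matrix machinery, here is a small sketch (my own illustration, not the PALISADE implementation) of base-b digit decomposition, the deterministic core of G-sampling: the gadget vector g = (1, b, b², ...) lets you invert by writing each coefficient in base b, and a larger base means fewer digits, hence smaller encoding matrices.

```python
def g_decompose(v, b, k):
    """Write each entry of v as k base-b digits (the deterministic,
    non-randomized analogue of G-inverse)."""
    digits = []
    for x in v:
        row = []
        for _ in range(k):
            row.append(x % b)
            x //= b
        digits.append(row)
    return digits

def g_recompose(digits, b):
    """Dot each digit row with the gadget vector (1, b, b^2, ...)."""
    return [sum(d * b**i for i, d in enumerate(row)) for row in digits]

q = 2**40        # toy modulus; the talk's implementation used prime moduli
b = 2**20        # large base, as in the talk: only k = 2 digits needed
k = 2            # need b**k >= q for the decomposition to be exact
v = [123456789, 987654321]
assert g_recompose(g_decompose(v, b, k), b) == v
```

With a binary base (b = 2) the same 40-bit coefficient needs k = 40 digits; moving to b = 2^20 cuts that to 2, which is the kind of matrix-size reduction the talk describes (at the cost of larger noise per digit, which is where the revisited correctness constraints come in).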
So this improved trapdoor construction, first of all, allowed us to reduce the key size. We also changed the primitive matrix: we went from the binary base, which was very typical at that time, to something much larger; for instance, in the implementation we used a base of 2 to the power of 20 for the digits of the gadget matrix decomposition. These changes reduced the program size, reduced the size of the matrices, and gave us many of the improvements. As a good side effect, this trapdoor construction was used to implement some simpler protocols, starting with GPV signatures, a regular digital signature approach based on direct application of the trapdoor, then identity-based encryption, ciphertext-policy attribute-based encryption, and the more complex key-policy attribute-based encryption. Those are all published in separate papers, but a lot of capabilities that are based on trapdoors can take advantage of this. Another change that was used in this case was related to a generalized alphabet for the directed encoding: instead of working with bits, we actually work with bytes, and some changes in the algorithms were made to accommodate that. Why bytes? Because they correspond to certain optimal conditions in this case. Then we revisited the correctness and security constraints, for instance using the central limit theorem and similar ideas. It also required a lot of math-related changes, such as more efficient matrix arithmetic, for example choosing the order in which to perform the evaluation so as to minimize the dimensions involved, and of course anything dealing with NTTs, since this is a ring-based implementation, and further improvements at the lower level of, let's say, integer sampling, because the trapdoor sampling requires so-called generic discrete Gaussian sampling, which is much more complex than what is used in homomorphic encryption programs. So we actually added implementations of
new methods, like Karney's method and another method that was proposed by UCSD, and of course parallelization: we leveraged OpenMP parallelization at multiple levels. So where are we right now? We're trying to implement more complex functions; that's one direction, and that's more of a problem related to implementation and practicality. Then there's the question: can there be general-purpose obfuscation constructions that cannot be broken? There have been several attempts to build those, some related to functional encryption, and that's another open question, because at this time we cannot say that secure general-purpose obfuscation constructions exist for arbitrary programs; we cannot make statements similar to what we can say about FHE. Are there any other major algorithmic improvements left that would reduce the obfuscated program size and affect the evaluation runtime as well? That's more of, I would say, a logical continuation of our research. But what's more important for solving the problems that DARPA formulated is exploring other approaches as well. While the initial focus of the DARPA SafeWare program was on indistinguishability obfuscation, at this time multiple approaches are being explored that are not based on indistinguishability obfuscation; some of them even use FHE in some ways. Effectively, the question is: what are secure techniques that can be used to achieve a certain functionality? One of the very promising approaches, one that we've also done a couple of implementations of, is so-called token-based obfuscation. It's related to functional encryption in the general sense, but the idea is that you introduce a certain trusted component, a trusted party, that can generate tokens for given inputs, and whenever you try to evaluate an input you first get a new token from the
trusted component, and this effectively bounds the number of queries that you can perform. One of the good things about this is that the GGH15 construction that we used here, for the distributional virtual black box conjunction obfuscation case, can also be applied in the token-based model: constraint-hiding constrained PRFs, for example, can be used to build the same functionality using the same underlying blocks but under a different security model. So we've been exploring this token-based approach and have actually developed several prototypes, both special-purpose, in other words not using the trapdoor functionality, and more general, for example for conjunctions and permutation branching programs, using this token-based approach. For instance, for one of the challenges we were able to solve it, and now a new challenge problem has been formulated by DARPA. The other possible direction is to switch the focus of program obfuscation from general capabilities to specific types of functions, such as evasive functions, which could make it a little easier to create more efficient implementations; that's another direction being explored right now. Any questions? This was the high-level discussion, and I have the paper open, so I can answer technical questions as well. [Audience question] With the token-based approach, does this really scale? You're trusting the hardware anyway, so why not put the function there? Yeah, typically the assumption is that the hardware would be a very small module, and I'm referring to the Department of Defense examples, with very limited, very basic capability, maybe just to generate tokens using some PRNG or some other mechanism that's available. This part is considered to be secure hardware, while the rest of the physical environment is considered an untrusted public evaluation environment, and that's where the obfuscated
program is stored. The obfuscated program is still the biggest component: even in this case we are dealing with gigabytes of data. So the idea is to use the trusted component only for very basic functionality and upload the obfuscated program to this public evaluator. That is the typical setup; it is usually a hardware/software system, at least in the context of the Department of Defense, that is how it is usually positioned.

[Question] Wouldn't it make more sense to put the program itself in the hardware?

Oh, you mean if we ignore the obfuscation-based approach and put the clear-text program in the hardware? I can give you an example. Typically those programs, again talking about the defense-industry examples, would need to be regenerated every now and then, so there would be a mechanism for upgrading the obfuscated program: you would send a new program, but the token generator would still be the same. So there is the idea of maintainability. For example, the information could be related to certain targets, and that information can be refreshed, say, once a week; this model allows you to support that. These are the advantages of this approach. Alternatively, of course, some different techniques can be used that are based on secret-key evaluation rather than public evaluation, but that is a totally different approach.

[Question] You can basically copy an existing network: even if you do not know the weights, you can use it as a black box, look at the classifications of that model, and train on the classifications of the existing model to basically distill the knowledge, that is what it is called. So how would this work in the case of neural networks? Would this make it harder to copy the network by that kind of distillation?

I would say that the problem is probably similar to what we discussed for the linear classifier, and relying on something
like a token-based approach would be the probable way to address this, I mean, to address the ability to basically steal the knowledge. I would say that this particular challenge that we ran into with the virtual-black-box assumption would apply to the more complex cases as well; that is how I would envision this being solved, it is the same problem, in other words. In our mind, token-based is more efficient; yes, it introduces a trusted hardware component, but for certain applications that is not an issue. Any other questions?

[Applause]
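The token-based model discussed in the talk can be sketched at a very high level. Everything below is illustrative: the class names, the HMAC-based tokens, and the query budget are my own assumptions chosen to show the interaction pattern (a small trusted token generator, an untrusted evaluator whose queries are bounded), not the GGH15-based construction itself; in particular, this toy evaluator verifies tokens with the same key the authority uses, which a real scheme would avoid.

```python
# Toy sketch of token-based evaluation: a tiny trusted component issues
# per-input tokens subject to a query budget, and the untrusted evaluator
# only runs the protected program on inputs that carry a valid token.
import hmac
import hashlib


class TokenAuthority:
    """Trusted hardware module: only issues tokens, nothing else."""

    def __init__(self, key: bytes, budget: int):
        self.key = key
        self.budget = budget  # bounds the total number of evaluations

    def issue_token(self, x: bytes) -> bytes:
        if self.budget <= 0:
            raise PermissionError("query budget exhausted")
        self.budget -= 1
        return hmac.new(self.key, x, hashlib.sha256).digest()


class PublicEvaluator:
    """Untrusted environment holding the (conceptually obfuscated) program."""

    def __init__(self, key: bytes, program):
        self.key = key  # NOTE: a real construction would not expose this key
        self.program = program

    def evaluate(self, x: bytes, token: bytes):
        expected = hmac.new(self.key, x, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, token):
            raise ValueError("invalid token for this input")
        return self.program(x)


key = b"shared-secret"
authority = TokenAuthority(key, budget=2)
evaluator = PublicEvaluator(key, program=lambda x: x.upper())

t = authority.issue_token(b"target-1")
print(evaluator.evaluate(b"target-1", t))  # b'TARGET-1'
```

The point of the sketch is the maintainability property mentioned above: the evaluator's `program` can be replaced with a fresh one while the token authority stays fixed, and the budget caps how many inputs can ever be evaluated.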

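The model-stealing attack raised in the last question can be illustrated with a toy sketch: a hidden linear "teacher" is queried only as a black box, and a "student" of the same shape is fit to its answers, recovering the hidden weights. The linear teacher, the hyperparameters, and all names here are assumptions for illustration; real distillation targets neural networks and their class probabilities, but the principle is the same, which is why output access alone defeats obfuscation of the weights.

```python
# Toy "distillation": copy a black-box model by training on its outputs.
import random


def teacher(x):
    # Hidden model; an attacker can only query it, not read its weights.
    w_secret = [2.0, -1.0, 0.5]
    return sum(wi * xi for wi, xi in zip(w_secret, x))


def distill(query, n_queries=200, dim=3, lr=0.01, epochs=500):
    random.seed(0)
    # Step 1: collect (input, output) pairs from the black box.
    data = []
    for _ in range(n_queries):
        x = [random.uniform(-1.0, 1.0) for _ in range(dim)]
        data.append((x, query(x)))
    # Step 2: fit a student of the same shape by stochastic gradient descent.
    w = [0.0] * dim
    for _ in range(epochs):
        for x, y in data:
            err = sum(wi * xi for wi, xi in zip(w, x)) - y
            for i in range(dim):
                w[i] -= lr * err * x[i]
    return w


student_w = distill(teacher)
print([round(wi, 2) for wi in student_w])  # close to the hidden [2.0, -1.0, 0.5]
```

Note that nothing in the attack touches the model's representation, which is why the speaker points to query-bounding (the token-based approach) rather than obfuscation alone as the probable mitigation.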