Oblivious RAM
An oblivious RAM (ORAM) simulator is a compiler that transforms algorithms in such a way that the resulting algorithms preserve the input-output behavior of the original algorithm, but the distribution of the memory access pattern of the transformed algorithm is independent of the memory access pattern of the original algorithm. The use of ORAMs is motivated by the fact that an adversary can obtain nontrivial information about the execution of a program and the nature of the data that it is dealing with just by observing the pattern in which various locations of memory are accessed during its execution. An adversary can get this information even if the data values are all encrypted. The definition applies equally well to the setting of protected programs running on unprotected shared memory and to that of a client running a program on its system while accessing previously stored data on a remote server. The concept was formulated by Oded Goldreich and Rafail Ostrovsky in 1996.


Definition

A Turing machine (TM), the mathematical abstraction of a real computer (program), is said to be oblivious if, for any two inputs of the same length, the motions of the tape heads remain the same. Pippenger and Fischer proved that every TM with running time T(n) can be made oblivious and that the running time of the oblivious TM is O(T(n)\log T(n)).

A more realistic model of computation is the RAM model. In the RAM model of computation, there is a CPU that can execute the basic mathematical, logical and control instructions. The CPU is also associated with a few registers and a physical random access memory, where it stores the operands of its instructions. The CPU in addition has instructions to read the contents of a memory cell and to write a specific value to a memory cell. The definition of ORAMs captures a similar notion of oblivious memory access in this model. Informally, an ORAM is an algorithm at the interface of a protected CPU and the physical RAM such that it acts like a RAM to the CPU by querying the physical RAM for the CPU while hiding information about the actual memory access pattern of the CPU from the physical RAM. In other words, the distributions of memory accesses of two programs that make the same number of memory accesses to the RAM are indistinguishable from each other. This description still makes sense if the CPU is replaced by a client with small storage and the physical RAM is replaced by a remote server with a large storage capacity, on which the data of the client resides.

The following is a formal definition of ORAMs. Let \Pi denote a program requiring memory of size n when executing on an input x. Suppose that \Pi has instructions for basic mathematical and control operations in addition to two special instructions \mathsf{read}(l) and \mathsf{write}(l,v), where \mathsf{read}(l) reads the value at location l and \mathsf{write}(l,v) writes the value v to l. The sequence of memory cells accessed by a program \Pi during its execution is called its memory access pattern and is denoted by \tilde{\Pi}(n,x).

A polynomial-time algorithm C is an oblivious RAM (ORAM) compiler with computational overhead c(\cdot) and memory overhead m(\cdot) if, given n \in \mathbb{N} and a deterministic RAM program \Pi with memory size n, C outputs a program \Pi_0 with memory size m(n)\cdot n such that for any input x, the running time of \Pi_0(n, x) is bounded by c(n)\cdot T, where T is the running time of \Pi(n, x), and there exists a negligible function \mu such that the following properties hold:
* Correctness: For any n \in \mathbb{N} and any string x \in \{0,1\}^*, with probability at least 1 - \mu(n), \Pi(n, x) = \Pi_0(n, x).
* Obliviousness: For any two programs \Pi_1, \Pi_2, any n \in \mathbb{N} and any two inputs x_1, x_2 \in \{0,1\}^*, if |\tilde{\Pi}_1(n, x_1)| = |\tilde{\Pi}_2(n, x_2)|, then \tilde{\Pi}'_1(n, x_1) is \mu-close to \tilde{\Pi}'_2(n, x_2) in statistical distance, where \Pi'_1 = C(n, \Pi_1) and \Pi'_2 = C(n, \Pi_2).

Note that the above definition uses the notion of statistical security. One can also have a similar definition for the notion of computational security.
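To make the leakage concrete, the following sketch (illustrative Python, not part of the formal definition; all names are ours) records the memory access pattern of a binary search. Two inputs of the same length produce different patterns, which is exactly the information an ORAM compiler must hide.

```python
def traced_binary_search(arr, target):
    """Binary search that records which array indices it touches."""
    trace = []
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        trace.append(mid)          # this index is visible to an observer
        if arr[mid] == target:
            break
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return trace

# Two runs on the same array, same input length, different targets:
arr = list(range(16))
low_trace = traced_binary_search(arr, 0)    # [7, 3, 1, 0]
high_trace = traced_binary_search(arr, 15)  # [7, 11, 13, 14, 15]
```

Even with the array contents encrypted, the traces alone reveal roughly where the target sits.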


History of ORAMs

ORAMs were introduced by Goldreich and Ostrovsky, wherein the key motivation was stated as software protection from an adversary who can observe the memory access pattern (but not the contents of the memory). The main result of this work is that there exists an ORAM compiler that uses O(n) server space and incurs a running-time overhead of O(\log^3 n) when making a program that uses n memory cells oblivious. This work initiated a series of works on the construction of oblivious RAMs that continues to this day.

There are several attributes to consider when comparing ORAM constructions. The most important parameters of an ORAM construction are the amount of client storage, the amount of server storage and the time overhead of making one memory access. Based on these attributes, the construction of Kushilevitz et al. is the best known ORAM construction. It achieves O(1) client storage, O(n) server storage and O(\log^2 n/\log\log n) access overhead.

Another important attribute of an ORAM construction is whether the access overhead is amortized or worst-case. Several of the earlier ORAM constructions have good amortized access-overhead guarantees but have \Omega(n) worst-case access overheads; some later constructions achieve polylogarithmic worst-case computational overheads. The earliest constructions were for the random oracle model, where the client assumes access to an oracle that behaves like a random function and returns consistent answers for repeated queries. As their authors noted, this oracle can be replaced by a pseudorandom function whose seed is a secret key stored by the client, if one assumes the existence of one-way functions. Later works removed this assumption completely while achieving an access overhead of O(\log^3 n), which is just a log factor away from the best known ORAM access overhead.

While most of the earlier works focus on proving security computationally, there are more recent works that use the stronger statistical notion of security. One of the only known lower bounds on the access overhead of ORAMs is due to Goldreich et al. They show an \Omega(\log n) lower bound for ORAM access overhead, where n is the data size. There is also a conditional lower bound on the access overhead of ORAMs, due to Boyle et al., that relates this quantity to the size of sorting networks.


ORAM constructions


Trivial construction

A trivial ORAM simulator construction, for each read or write operation, reads from and writes to every single element in the array, performing a meaningful action only for the address specified in that single operation. The trivial solution thus scans through the entire memory for each operation. This scheme incurs a time overhead of \Omega(n) for each memory operation, where n is the size of the memory.
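As a sketch, the trivial scan can be written as follows (illustrative Python; the class and method names are ours). Because every operation touches every cell in the same order, the access trace is identical regardless of the address or operation:

```python
class TrivialORAM:
    """Trivial ORAM: every operation reads and writes back every cell,
    so the observable access pattern is a fixed full scan."""

    def __init__(self, n):
        self.mem = [0] * n
        self.trace = []             # addresses visible to an observer

    def access(self, op, addr, value=None):
        result = None
        for i in range(len(self.mem)):
            self.trace.append(i)    # every cell is read...
            cell = self.mem[i]
            if i == addr:           # meaningful action only at addr
                result = cell
                if op == "write":
                    cell = value
            self.mem[i] = cell      # ...and written back
        return result
```

For example, a write to address 3 and a read of address 5 leave the same trace 0, 1, ..., n-1, at the cost of \Omega(n) work per operation.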


A simple ORAM scheme

A simple version of a statistically secure ORAM compiler constructed by Chung and Pass is described in the following, along with an overview of the proof of its correctness. The compiler, on input n and a program \Pi with memory requirement n, outputs an equivalent oblivious program \Pi'. If the input program uses r registers, the output program will need r + n/\alpha + \mathrm{poly}\log n registers, where \alpha > 1 is a parameter of the construction. \Pi' uses O(n\,\mathrm{poly}\log n) memory and its (worst-case) access overhead is O(\mathrm{poly}\log n).

The ORAM compiler is very simple to describe. Suppose that the original program \Pi has instructions for basic mathematical and control operations in addition to two special instructions \mathsf{read}(l) and \mathsf{write}(l,v), where \mathsf{read}(l) reads the value at location l and \mathsf{write}(l,v) writes the value v to l. The ORAM compiler, when constructing \Pi', simply replaces each \mathsf{read} and \mathsf{write} instruction with the subroutines \mathsf{Oread} and \mathsf{Owrite} and keeps the rest of the program the same. It may be noted that this construction can be made to work even for memory requests coming in an online fashion.


Memory organization of the oblivious program

The program \Pi' stores a complete binary tree T of depth d = \log(n/\alpha) in its memory. Each node in T is represented by a binary string of length at most d. The root is the empty string. The left and right children of a node represented by the string \gamma are \gamma 0 and \gamma 1 respectively. The program \Pi' thinks of the memory of \Pi as being partitioned into blocks, where each block is a contiguous sequence of memory cells of size \alpha. Thus, there are at most \lceil n/\alpha \rceil blocks in total. In other words, the memory cell r corresponds to block b = \lfloor r/\alpha \rfloor.

At any point of time, there is an association between the blocks and the leaves in T. To keep track of this association, \Pi' also stores a data structure called a position map, denoted by Pos, using O(n/\alpha) registers. This data structure, for each block b, stores the leaf of T associated with b in Pos(b).

Each node in T contains an array with at most K triples. Each triple is of the form (b, Pos(b), v), where b is a block identifier and v is the contents of the block. Here, K is a security parameter and is O(\mathrm{poly}\log n).
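The cell-to-block and block-to-leaf bookkeeping above can be sketched with a few helpers (illustrative Python; the function names are ours):

```python
import math

def block_of(l, alpha):
    """Memory cell l lives in block b = floor(l / alpha)."""
    return l // alpha

def component_of(l, alpha):
    """Offset of cell l inside its block (l mod alpha)."""
    return l % alpha

def tree_depth(n, alpha):
    """Depth d = log(n / alpha) of the complete binary tree T,
    rounded up so every block can be mapped to a distinct leaf."""
    return math.ceil(math.log2(math.ceil(n / alpha)))
```

For instance, with n = 64 cells and block size alpha = 4, cell 9 sits at offset 1 of block 2, and T has depth log(64/4) = 4.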


Description of the oblivious program

The program \Pi' starts by initializing its memory as well as its registers to \perp. Describing the procedures \mathsf{Owrite} and \mathsf{Oread} is enough to complete the description of \Pi'. The sub-routine \mathsf{Owrite} is given below. The inputs to the sub-routine are a memory location l \in [n] and the value v to be stored at the location l. It has three main phases, namely FETCH, PUT_BACK and FLUSH.

Owrite(l, v)
  input: a location l, a value v

  Procedure FETCH                          // Search for the required block.
    b \leftarrow \lfloor l/\alpha \rfloor  // b is the block containing l.
    i \leftarrow l \bmod \alpha            // i is l's component in the block b.
    pos \leftarrow Pos(b)
    if pos = \perp then pos \leftarrow_R [n/\alpha]  // Set pos to a uniformly random leaf in T.
    flag \leftarrow 0
    for each node N on the path from the root to pos do
      if N has a triple of the form (b, pos, x) then
        Remove (b, pos, x) from N, store x in a register, and write back the updated N.
        flag \leftarrow 1
      else
        Write back N.

  Procedure PUT_BACK                       // Add back the updated block at the root.
    pos' \leftarrow_R [n/\alpha]           // Set pos' to a uniformly random leaf in T.
    Pos(b) \leftarrow pos'
    if flag = 1 then
      Set x' to be the same as x except for v at the i-th position.
    else
      Set x' to be a block with v at the i-th position and \perp's everywhere else.
    if there is space left in the root then
      Add the triple (b, pos', x') to the root of T.
    else
      Abort outputting overflow.

  Procedure FLUSH                          // Push the blocks on a random path as far down as possible.
    pos^* \leftarrow_R [n/\alpha]          // Set pos^* to a uniformly random leaf in T.
    for each triple (b'', pos'', v'') in the nodes traversed along the path from the root to pos^* do
      Push down this triple to the node that corresponds to the longest common prefix of pos'' and pos^*.
      if at any point some bucket is about to overflow then
        Abort outputting overflow.

The task of the FETCH phase is to look for the location l in the tree T. Suppose pos is the leaf associated with the block containing location l. For each node N in T on the path from the root to pos, this procedure goes over all triples in N and looks for the triple corresponding to the block containing l. If it finds that triple in N, it removes the triple from N and writes back the updated state of N. Otherwise, it simply writes back the whole node N.

In the next phase, it updates the block containing l with the new value v, associates that block with a freshly sampled uniformly random leaf of the tree, and writes back the updated triple to the root of T. The last phase, which is called FLUSH, is an additional operation to release the memory cells in the root and other higher internal nodes. Specifically, the algorithm chooses a uniformly random leaf pos^* and then tries to push down every triple as far as possible along the path from the root to pos^*. It aborts outputting an overflow if at any point some bucket is about to overflow its capacity.

The sub-routine \mathsf{Oread} is similar to \mathsf{Owrite}. For the \mathsf{Oread} sub-routine, the input is just a memory location l \in [n], and it is almost the same as \mathsf{Owrite}. In the FETCH stage, if it does not find a triple corresponding to the location l, it returns \perp as the value at location l. In the PUT_BACK phase, it will write back the same block that it read to the root, after associating it with a freshly sampled uniformly random leaf.
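A minimal, illustration-only Python simulation of the data flow of \mathsf{Owrite} and \mathsf{Oread} might look as follows. It is a sketch under simplifying assumptions: the tree is an array of buckets, the position map is kept in the clear, timing and dummy accesses are not modeled, and all names (SimpleORAM, _flush, etc.) are ours, not from the paper.

```python
import math
import random

class SimpleORAM:
    """Illustration-only simulation of the tree ORAM described above.
    alpha is the block size; K is the bucket capacity (security parameter)."""

    def __init__(self, n, alpha=2, K=20, seed=0):
        self.alpha, self.K = alpha, K
        self.rng = random.Random(seed)
        self.num_blocks = math.ceil(n / alpha)
        self.depth = max(1, math.ceil(math.log2(self.num_blocks)))
        self.num_leaves = 2 ** self.depth
        # The tree T as an array of buckets; node i has children 2i+1, 2i+2.
        self.tree = [[] for _ in range(2 ** (self.depth + 1) - 1)]
        self.pos = [None] * self.num_blocks   # the position map Pos

    def _path(self, leaf):
        """Node indices from the root down to the given leaf."""
        idx, path = 2 ** self.depth - 1 + leaf, []
        while idx > 0:
            path.append(idx)
            idx = (idx - 1) // 2
        path.append(0)
        return path[::-1]

    def _access(self, l, v=None):
        # FETCH: scan the root-to-leaf path for the block containing l.
        b, i = divmod(l, self.alpha)
        pos = self.pos[b]
        if pos is None:                       # block never accessed before
            pos = self.rng.randrange(self.num_leaves)
        block = None
        for idx in self._path(pos):
            for t in self.tree[idx]:
                if t[0] == b:
                    block = t[2]
                    self.tree[idx].remove(t)
                    break
        if block is None:
            block = [None] * self.alpha       # None plays the role of ⊥
        old = block[i]
        if v is not None:                     # Owrite updates the component
            block[i] = v
        # PUT_BACK: fresh random leaf, triple goes to the root.
        new_pos = self.rng.randrange(self.num_leaves)
        self.pos[b] = new_pos
        if len(self.tree[0]) >= self.K:
            raise OverflowError("overflow")
        self.tree[0].append((b, new_pos, block))
        self._flush()
        return old

    def _flush(self):
        # FLUSH: push triples down along a random root-to-leaf path.
        fpath = self._path(self.rng.randrange(self.num_leaves))
        for d, idx in enumerate(fpath):
            for t in list(self.tree[idx]):
                tpath = self._path(t[1])
                j = d                         # longest common prefix of paths
                while (j + 1 < len(fpath) and j + 1 < len(tpath)
                       and fpath[j + 1] == tpath[j + 1]):
                    j += 1
                dest = fpath[j]
                if dest != idx and len(self.tree[dest]) < self.K:
                    self.tree[idx].remove(t)
                    self.tree[dest].append(t)

    def read(self, l):
        return self._access(l)       # Oread: returns None (⊥) if unset

    def write(self, l, v):
        self._access(l, v)           # Owrite
```

Note the invariant the sketch maintains: each block's triple always lies on the root-to-leaf path of its current position-map entry, so FETCH is guaranteed to find it.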


Correctness of the simple ORAM scheme

Let C stand for the ORAM compiler that was described above. Given a program \Pi, let \Pi' denote C(\Pi). Let \Pi(n,x) denote the execution of the program \Pi on an input x using n memory cells. Also, let \tilde{\Pi}(n,x) denote the memory access pattern of \Pi(n,x). Let \mu denote a function such that for any n \in \mathbb{N}, for any program \Pi and for any input x \in \{0,1\}^*, the probability that \Pi'(n,x) outputs an overflow is at most \mu(n). The following lemma is easy to see from the description of C.

Equivalence Lemma: Let n \in \mathbb{N} and x \in \{0,1\}^*. Given a program \Pi, with probability at least 1 - \mu(n), the output of \Pi'(n,x) is identical to the output of \Pi(n,x).

It is easy to see that each \mathsf{Owrite} and \mathsf{Oread} operation traverses root-to-leaf paths in T chosen uniformly and independently at random. This fact implies that the distributions of memory access patterns of any two programs that make the same number of memory accesses are indistinguishable if they both do not overflow.

Obliviousness Lemma: Given two programs \Pi_1 and \Pi_2 and two inputs x_1, x_2 \in \{0,1\}^* such that |\tilde{\Pi}_1(x_1,n)| = |\tilde{\Pi}_2(x_2,n)|, with probability at least 1 - 2\mu(n), the access patterns \tilde{\Pi}'_1(x_1,n) and \tilde{\Pi}'_2(x_2,n) are identical.

The following lemma completes the proof of correctness of the ORAM scheme.

Overflow Lemma: There exists a negligible function \mu such that for every program \Pi, every n and input x, the program \Pi'(n,x) outputs overflow with probability at most \mu(n).


Computational and memory overheads

During each \mathsf{Oread} and \mathsf{Owrite} operation, two random root-to-leaf paths of T are fully explored by \Pi'. This takes O(K \cdot \log(n/\alpha)) time. This is the same as the computational overhead and is O(\mathrm{poly}\log n) since K is O(\mathrm{poly}\log n).

The total memory used up by \Pi' is equal to the size of T. Each triple stored in the tree has \alpha + 2 words in it, and thus there are K(\alpha + 2) words per node of the tree. Since the total number of nodes in the tree is O(n/\alpha), the total memory size is O(nK) words, which is O(n\,\mathrm{poly}\log n). Hence, the memory overhead of the construction is O(\mathrm{poly}\log n).




See also

* Oblivious data structure
* Cache-oblivious algorithm