Differentially Private Analysis Of Graphs

Differentially private analysis of graphs studies algorithms for computing accurate graph statistics while preserving differential privacy. Such algorithms are used for data represented in the form of a graph where nodes correspond to individuals and edges correspond to relationships between them. For example, edges could correspond to friendships, sexual relationships, or communication patterns. A party that has collected sensitive graph data can process it with a differentially private algorithm and publish the output of the algorithm. The goal of differentially private analysis of graphs is to design algorithms that compute accurate global information about a graph while preserving the privacy of the individuals whose data it stores.


Variants

Differential privacy imposes a restriction on the algorithm. Intuitively, it requires that the algorithm have roughly the same output distribution on neighboring inputs. When the input is a graph, there are two natural notions of neighboring inputs, edge neighbors and node neighbors, which yield two natural variants of differential privacy for graph data. Let \epsilon be a positive real number and \mathcal{A} be a randomized algorithm that takes a graph as input and returns an output from a set \mathcal{O}. The algorithm \mathcal{A} is \epsilon-differentially private if, for all neighboring graphs G_1 and G_2 and all subsets S of \mathcal{O},
\Pr[\mathcal{A}(G_1) \in S] \leq e^{\epsilon} \times \Pr[\mathcal{A}(G_2) \in S],
where the probability is taken over the randomness used by the algorithm.
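The two neighbor notions can be made concrete with a small sketch. This is an illustrative example, not code from the literature: it assumes a simple undirected graph represented as a pair (nodes, edges), with edges stored as a set of two-element frozensets, and the helper names are invented for this sketch.

```python
# Sketch of the two neighbor notions on simple undirected graphs.
# A graph is a pair (nodes, edges); edges is a set of frozensets of size 2.

def are_edge_neighbors(g1, g2):
    """G1 and G2 are edge neighbors if they have the same nodes and
    their edge sets differ in exactly one edge."""
    (n1, e1), (n2, e2) = g1, g2
    return n1 == n2 and len(e1 ^ e2) == 1

def remove_node(g, v):
    """Delete node v together with all edges adjacent to it."""
    nodes, edges = g
    return (nodes - {v}, {e for e in edges if v not in e})

def are_node_neighbors(g1, g2):
    """G1 and G2 are node neighbors if one can be obtained from the
    other by deleting a single node and its adjacent edges."""
    for a, b in ((g1, g2), (g2, g1)):
        for v in a[0]:
            if remove_node(a, v) == b:
                return True
    return False
```

For a triangle on nodes {1, 2, 3}, deleting one edge yields an edge neighbor, while deleting node 3 (and its two adjacent edges) yields a node neighbor.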


Edge differential privacy

Two graphs are edge neighbors if they differ in one edge. An algorithm is \epsilon-edge-differentially private if, in the definition above, the notion of edge neighbors is used. Intuitively, an edge differentially private algorithm has similar output distributions on any pair of graphs that differ in one edge, thus protecting individual relationships (edges) in the graph.
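As a minimal sketch of how edge differential privacy is typically achieved, the example below releases a noisy edge count via the standard Laplace mechanism. Adding or removing one edge changes the count by exactly 1, so its edge sensitivity is 1 and Laplace noise of scale 1/ε suffices. The function name is illustrative, not from the sources cited here.

```python
import random

def edge_dp_edge_count(edges, epsilon):
    """Release the number of edges with epsilon-edge-differential privacy.

    The edge count has edge sensitivity 1 (edge neighbors differ in one
    edge), so the Laplace mechanism with noise scale 1/epsilon applies.
    """
    scale = 1.0 / epsilon
    # The difference of two independent Exp(1/scale) variables is
    # Laplace-distributed with location 0 and scale `scale`.
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return len(edges) + noise
```

With a weak privacy requirement (large ε) the noise is tiny and the released value is close to the true count; as ε shrinks, the noise, and hence the error, grows.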


Node differential privacy

Two graphs are node neighbors if one can be obtained from the other by deleting a node and its adjacent edges. An algorithm is \epsilon-node-differentially private if, in the definition above, the notion of node neighbors is used. Intuitively, a node differentially private algorithm has similar output distributions on any pair of graphs that differ in one node and the edges adjacent to it, thus protecting all information pertaining to each individual. Node differential privacy gives a stronger privacy guarantee than edge differential privacy.
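The stronger guarantee comes at a cost in accuracy, which a sketch makes concrete. Under node neighbors, deleting one node from an n-node graph can remove up to n − 1 edges, so the node sensitivity of the edge count is n − 1, and a naive Laplace mechanism must use noise of scale (n − 1)/ε rather than the scale 1/ε that suffices under edge neighbors. This example is illustrative only; the node differentially private algorithms cited below use more sophisticated techniques to avoid such worst-case noise.

```python
import random

def naive_node_dp_edge_count(edges, n, epsilon):
    """Release the edge count of an n-node graph with epsilon-node-DP.

    Deleting one node can remove up to n - 1 edges, so the node
    sensitivity of the edge count is n - 1, and the Laplace noise
    scale must grow linearly with n.
    """
    scale = (n - 1) / epsilon
    # Difference of two independent exponentials is Laplace(0, scale).
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return len(edges) + noise
```

Comparing the two noise scales, 1/ε versus (n − 1)/ε, illustrates why node differentially private algorithms with good accuracy were much harder to obtain.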


Research history

The first edge differentially private algorithm was designed by Nissim, Raskhodnikova, and Smith. The distinction between edge and node differential privacy was first discussed by Hay, Miklau, and Jensen. However, it took several years before the first node differentially private algorithms were published, by Blocki et al., Kasiviswanathan et al., and Chen and Zhou. In all three papers, the algorithms release a single statistic, such as a triangle count or counts of other subgraphs. Raskhodnikova and Smith gave the first node differentially private algorithm for releasing a vector, specifically, the degree count and the degree distribution.

