Information flow in an information-theoretical context is the transfer of information from a variable x to a variable y in a given process. Not all flows may be desirable; for example, a system should not leak any confidential information (partially or not) to public observers, as this is a violation of privacy on an individual level and might cause major loss on a corporate level.


Introduction

Securing the data manipulated by computing systems has been a challenge for many years. Several methods to limit the disclosure of information exist today, such as access control lists, firewalls, and cryptography. However, although these methods do impose limits on the information that is released by a system, they provide no guarantees about information ''propagation'' (Andrei Sabelfeld and Andrew C. Myers. Language-Based Information-Flow Security. IEEE Journal on Selected Areas in Communications, 21(1), Jan. 2003). For example, access control lists of file systems prevent unauthorized file access, but they do not control how the data is used afterwards. Similarly, cryptography provides a means to exchange information privately across a non-secure channel, but no guarantees about the confidentiality of the data are given once it is decrypted.

In low-level information flow analysis, each variable is usually assigned a security level. The basic model comprises two distinct levels: low and high, meaning, respectively, publicly observable information and secret information. To ensure confidentiality, flows from high to low variables should not be allowed. On the other hand, to ensure integrity, flows to high variables should be restricted. More generally, the security levels can be viewed as a lattice with information flowing only upwards in the lattice (Dorothy Denning. A lattice model of secure information flow. Communications of the ACM, 19(5):236-242, 1976). For example, considering two security levels L and H (low and high), if L \le H, flows from L to L, from H to H, and from L to H would be allowed, while flows from H to L would not.

Throughout this article, the following notation is used:
* variable l \in L (low) shall denote a publicly observable variable
* variable h \in H (high) shall denote a secret variable
where L and H are the only two security levels in the lattice being considered.
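
The "upwards only" rule can be made concrete with a small sketch. The following Python fragment (an illustration only; the level names and the flow_allowed function are assumptions, not part of the original presentation) models the two-level lattice L \le H and checks whether a flow between two levels is permitted:

from enum import IntEnum

class Level(IntEnum):
    LOW = 0    # publicly observable
    HIGH = 1   # secret

def flow_allowed(src: Level, dst: Level) -> bool:
    # a flow src -> dst is allowed only if it goes upwards in the lattice
    return src <= dst

assert flow_allowed(Level.LOW, Level.HIGH)       # L -> H: allowed
assert flow_allowed(Level.HIGH, Level.HIGH)      # H -> H: allowed
assert not flow_allowed(Level.HIGH, Level.LOW)   # H -> L: forbidden

Richer lattices (for example, per-department or per-principal labels) only require replacing the integer comparison with the lattice's partial order.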


Explicit flows and side channels

Information flows can be divided into two major categories. The simplest one is explicit flow, where some secret is explicitly leaked to a publicly observable variable. In the following example, the secret in the variable ''h'' flows into the publicly observable variable ''l''.

 var l, h
 l := h

The other flows fall into the side channel category. For example, in a timing attack or in a power analysis attack, the system leaks information through, respectively, the time or the power it takes to perform an action, which depends on a secret value. In the following example, the attacker can deduce whether the value of ''h'' is one or not by the time the program takes to finish:

 var l, h
 if h = 1 then
   (* do some time-consuming work *)
 l := 0

Another side channel flow is the implicit information flow, which consists of leakage of information through the program control flow. The following program (implicitly) discloses the value of the secret variable ''h'' to the variable ''l''. In this case, since the variable ''h'' is boolean, all the bits of ''h'' are disclosed (at the end of the program, ''l'' will be 3 if ''h'' is true, and 42 otherwise).

 var l, h
 if h = true then
   l := 3
 else
   l := 42
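
To make the implicit flow concrete, here is a small Python transcription of the last example (an illustration only, not from the original text). Even though ''h'' is never assigned to ''l'' directly, an observer who sees only ''l'' can recover the secret:

def program(h: bool) -> int:
    # the secret h is never copied to l, yet l's final value depends on it
    if h:
        l = 3
    else:
        l = 42
    return l          # publicly observable output

def attacker_guess(observed_l: int) -> bool:
    # inverting the branch recovers the secret from the public output
    return observed_l == 3

for secret in (True, False):
    assert attacker_guess(program(secret)) == secret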


Non-interference

Non-interference is a policy requiring that an attacker should not be able to distinguish two computations from their outputs if those computations vary only in their secret inputs. However, this policy is too strict to be usable in realistic programs. The classic example is a password checker program that, in order to be useful, needs to disclose some secret information: whether the input password is correct or not (note that the information an attacker learns in case the program ''rejects'' the password is that the attempted password is ''not'' the valid one).
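
A simple way to see the violation is to run the checker twice with the same public input but different secrets and compare the public outputs. The following Python sketch (an assumed example, not from the article) does exactly that:

def check_password(attempt: str, secret_password: str) -> bool:
    # attempt is public (low); secret_password is secret (high)
    return attempt == secret_password

public_input = "letmein"
run1 = check_password(public_input, secret_password="hunter2")   # False
run2 = check_password(public_input, secret_password="letmein")   # True

# Non-interference would require run1 == run2: the public output must not
# depend on the secret. Here it does, by exactly the one bit a password
# checker has to release.
assert run1 != run2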


Information flow control

A mechanism for ''information flow control'' is one that enforces information flow policies. Several methods to enforce information flow policies have been proposed. Run-time mechanisms that tag data with information flow labels have been employed at the operating system level and at the programming language level. Static program analyses have also been developed that ensure information flows within programs are in accordance with policies. Both static and dynamic analyses for current programming languages have been developed. However, dynamic analysis techniques cannot observe all execution paths, and therefore cannot be both sound and precise. In order to guarantee non-interference, they either terminate executions that might release sensitive information or they ignore updates that might leak information. A prominent way to enforce information flow policies in a program is through a security type system: that is, a type system that enforces security properties. In such a sound type system, if a program type-checks, it meets the flow policy and therefore contains no improper information flows.
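
As a rough illustration of the run-time tagging approach (the class and function names here are assumptions, not any particular system's API), values can carry labels that are joined as data is combined, and a public sink checks the label before releasing anything:

from dataclasses import dataclass

LOW, HIGH = "low", "high"

def join(a: str, b: str) -> str:
    # least upper bound in the two-level lattice
    return HIGH if HIGH in (a, b) else LOW

@dataclass
class Labeled:
    value: int
    label: str

    def __add__(self, other: "Labeled") -> "Labeled":
        # combining values joins their labels
        return Labeled(self.value + other.value, join(self.label, other.label))

def publish(x: Labeled) -> None:
    # a public sink only accepts LOW data
    if x.label != LOW:
        raise RuntimeError("policy violation: high data flowing to a public sink")
    print(x.value)

salary = Labeled(100_000, HIGH)
bonus_rate = Labeled(10, LOW)

publish(bonus_rate)                 # fine: low data
try:
    publish(salary + bonus_rate)    # rejected: the sum is tainted high
except RuntimeError as err:
    print(err)

A static analysis or a security type system performs the same reasoning before the program runs, which is what the next section formalizes.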


Security type system

In a programming language augmented with a security type system, every expression carries both a type (such as boolean or integer) and a security label.

Following is a simple security type system that enforces non-interference. The notation \vdash exp\;:\;\tau means that the expression exp has type \tau. Similarly, [sc]\vdash C means that the command C is typable in the security context sc.

[1-2]\quad \vdash exp\;:\;high \qquad \frac{h \notin Vars(exp)}{\vdash exp\;:\;low}

[3]\quad [sc]\vdash \textbf{skip} \qquad [sc]\vdash h\;:=\;exp \qquad \frac{\vdash exp\;:\;low}{[low]\vdash l\;:=\;exp}

[4-5]\quad \frac{[sc]\vdash C_1,\quad [sc]\vdash C_2}{[sc]\vdash C_1;\,C_2} \qquad \frac{\vdash exp\;:\;sc,\quad [sc]\vdash C_1,\quad [sc]\vdash C_2}{[sc]\vdash \textbf{if}\ exp\ \textbf{then}\ C_1\ \textbf{else}\ C_2}

[6-7]\quad \frac{\vdash exp\;:\;sc,\quad [sc]\vdash C}{[sc]\vdash \textbf{while}\ exp\ \textbf{do}\ C} \qquad \frac{[high]\vdash C}{[low]\vdash C}

Well-typed commands include, for example:

[low]\vdash \textbf{if}\ l = 42\ \textbf{then}\ h\;:=\;3\ \textbf{else}\ l\;:=\;0

Conversely, the program

l\;:=\;0\ ;\ \textbf{while}\ l < h\ \textbf{do}\ l\;:=\;l + 1

is ill-typed, as it will disclose the value of variable h into l.

Note that rule [7] is a subsumption rule, which means that any command that is of security type high can also be low. For example, h\;:=\;1 can be both high and low. This is called polymorphism in type theory. Similarly, the type of an expression exp that satisfies h \notin Vars(exp) can be both high and low, according to [1] and [2] respectively.
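
The rules above can be read algorithmically: the security context acts as a program-counter label, and the subsumption rule is folded into a join at branch points. The following Python sketch is an illustration under assumed conventions (for example, that variables whose names start with ''h'' are secret); it checks the two example programs above:

LOW, HIGH = 0, 1   # LOW <= HIGH

def var_level(name: str) -> int:
    # assumed naming convention for this sketch: h* variables are secret
    return HIGH if name.startswith("h") else LOW

def exp_level(free_vars: set) -> int:
    # rules [1-2]: an expression is low only if it mentions no high variable
    return max((var_level(v) for v in free_vars), default=LOW)

def typable(cmd, pc: int = LOW) -> bool:
    # pc plays the role of the security context [sc]
    kind = cmd[0]
    if kind == "skip":                                # rule [3]
        return True
    if kind == "assign":                              # rule [3]
        _, target, exp_vars = cmd
        return max(pc, exp_level(exp_vars)) <= var_level(target)
    if kind == "seq":                                 # rule [4]
        return all(typable(c, pc) for c in cmd[1:])
    if kind in ("if", "while"):                       # rules [5-7]
        _, guard_vars, *bodies = cmd
        body_pc = max(pc, exp_level(guard_vars))      # subsumption folded in
        return all(typable(c, body_pc) for c in bodies)
    raise ValueError(kind)

# if l = 42 then h := 3 else l := 0      -- well-typed
ok = ("if", {"l"}, ("assign", "h", set()), ("assign", "l", set()))
# l := 0 ; while l < h do l := l + 1     -- ill-typed (implicit flow)
bad = ("seq", ("assign", "l", set()),
       ("while", {"l", "h"}, ("assign", "l", {"l"})))

print(typable(ok))    # True
print(typable(bad))   # False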


Declassification

As shown previously, the non-interference policy is too strict for use in most real-world applications (S. Zdancewic. Challenges for information-flow security. In Workshop on the Programming Language Interference and Dependence (PLID’04), 2004). Therefore, several approaches to allow controlled releases of information have been devised. Such approaches are called information declassification. Robust declassification requires that an active attacker may not manipulate the system in order to learn more secrets than what passive attackers already know (Andrei Sabelfeld and David Sands. Dimensions and Principles of Declassification. In Proc. of the IEEE Computer Security Foundations Workshop, 2005). Information declassification constructs can be classified along four orthogonal dimensions: ''what'' information is released, ''who'' is authorized to access the information, ''where'' the information is released, and ''when'' the information is released.


What

A ''what'' declassification policy controls which information (partial or not) may be released to a publicly observable variable. The following code example shows a declassify construct from Sabelfeld and Myers (A. Sabelfeld and A. C. Myers. A model for delimited information release. In Proc. of International Symposium on Software Security (ISSS), 2003). In this code, the value of the variable ''h'' is explicitly allowed by the programmer to flow into the publicly observable variable ''l''.

 var l, h
 if l = 1 then
   l := declassify(h)
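
In practice, a ''what'' policy usually releases only a function of the secret rather than the secret itself. The following Python sketch is a hypothetical example (the policy and function names are assumptions); it releases only the parity of ''h'':

secret_h = 41          # HIGH: the secret itself must not flow to l

def declassify_parity(h: int) -> int:
    # the "what" policy: only h mod 2 is allowed to become public
    return h % 2

public_l = declassify_parity(secret_h)   # LOW: one bit released, by design
print(public_l)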


Who

A ''who'' declassification policy controls which principals (i.e., who) can access a given piece of information. This kind of policy has been implemented in the Jif compiler (Jif: Java information flow). The following example allows Bob to share his secret contained in the variable ''b'' with Alice through the commonly accessible variable ''ab''.

 var ab (* {Alice, Bob} *)
 var b  (* {Bob} *)
 if ab = 1 then
   ab := declassify(b, {Alice, Bob}) (* {Alice, Bob} *)


Where

A ''where'' declassification policy regulates where the information can be released, for example, by controlling in which lines of the source code information can be released. The following example makes use of the flow construct proposed by Almeida Matos and Boudol (A. Almeida Matos and G. Boudol. On declassification and the non-disclosure policy. In Proc. IEEE Computer Security Foundations Workshop, 2005). This construct takes a flow policy (in this case, variables in H are allowed to flow to variables in L) and a command, which is run under the given flow policy.

 var l, h
 flow H \prec L in
   l := h


When

A ''when'' declassification policy regulates when the information can be released. Policies of this kind can be used to verify programs that implement, for example, controlled release of secret information after payment, or encrypted secrets that should not be recoverable within a certain time by attackers with polynomial computational power.


Declassification approaches for implicit flows

An implicit flow occurs when code whose conditional execution is based on private information updates a public variable. This is especially problematic when multiple executions are considered, since an attacker could leverage the public variable to infer private information by observing how its value changes over time or with the input.


The naïve approach

The naïve approach consists of enforcing the confidentiality property on all variables whose value is affected by other variables. This method leads to partially leaked information, because in some runs of the application a variable may be Low while in others it is High.


No sensitive upgrade

No sensitive upgrade halts the program whenever a High variable affects the value of a Low variable, effectively preventing information leakage. Since it simply looks for expressions where an information leakage might happen, without looking at the context, it may halt a program that never actually leaks information despite having a potential leak. In the following example, x is High and y is Low:

 var x, y
 y := false
 if x = true then
   y := true
 return true

In this case the program would be halted, since it uses the value of a High variable to change a Low variable, even though the program never actually leaks information.
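
A minimal sketch of this strategy as a run-time monitor (an assumed design, not a specific tool) is shown below; the example program halts on the secret-dependent branch even though its observable result is the same either way:

LOW, HIGH = 0, 1

class Halt(Exception):
    pass

class Monitor:
    def __init__(self, labels):
        self.labels = labels                 # variable name -> label

    def assign(self, var, value_label, pc):
        # no sensitive upgrade: refuse to write a Low variable under a High pc
        if pc == HIGH and self.labels.get(var, LOW) == LOW:
            raise Halt(f"write to low variable {var!r} under secret control")
        self.labels[var] = max(value_label, pc)

def run(x: bool) -> bool:
    m = Monitor({"x": HIGH, "y": LOW})
    m.assign("y", LOW, pc=LOW)               # y := false
    if x:                                    # branching on High data: pc is HIGH
        m.assign("y", LOW, pc=HIGH)          # halted here
    return True

run(False)                                   # completes normally
try:
    run(True)                                # halts, although nothing observable leaks
except Halt as err:
    print(err)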


Permissive upgrade

Permissive upgrade introduces an extra security class P which identifies information-leaking variables. When a High variable affects the value of a Low variable, the latter is labelled P. If a P-labelled variable affects a Low variable, the program is halted. To prevent the halting, the Low and P variables should be converted to High using a privatization function, to ensure that no information leakage can occur. On subsequent runs the program will execute without interruption.
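
The following Python sketch (an assumed design, continuing the monitor idea above) illustrates the extra P label: the first secret-influenced write is tolerated, and only a later use of the P-labelled variable halts the run:

LOW, P, HIGH = 0, 1, 2                       # P marks "possibly leaking" data

class Halt(Exception):
    pass

labels = {"x": HIGH, "y": LOW, "z": LOW}

def assign(var, source_label, pc):
    if source_label == P:
        # a P-labelled value influencing another variable halts the program
        raise Halt(f"P-labelled data flowing into {var!r}")
    if pc == HIGH and labels[var] == LOW:
        labels[var] = P                      # upgrade instead of halting
    else:
        labels[var] = max(labels[var], source_label, pc)

assign("y", LOW, pc=HIGH)                    # y written under secret control: now P
try:
    assign("z", labels["y"], pc=LOW)         # later use of y halts the run
except Halt as err:
    print(err)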


Privatization inference

Privatization inference extends permissive upgrade to automatically apply the privatization function to any variable that might leak information. This method should be used during testing, where it will convert most variables. Once the program moves into production, permissive upgrade should be used to halt the program in case of an information leakage, and the privatization functions can be updated to prevent subsequent leaks.


Application in computer systems

Beyond applications to programming languages, information flow control theories have been applied to operating systems (M. Krohn, A. Yip, M. Brodsky, N. Cliffer, M. Kaashoek, E. Kohler and R. Morris. Information flow control for standard OS abstractions. In ACM SIGOPS Symposium on Operating Systems Principles, 2007), to distributed systems (N. Zeldovich, S. Boyd-Wickizer and D. Mazieres. Securing Distributed Systems with Information Flow Control. In USENIX Symposium on Networked Systems Design and Implementation, 2008), and to cloud computing (J. Bacon, D. Eyers, T. Pasquier, J. Singh, I. Papagiannis and P. Pietzuch. Information Flow Control for Secure Cloud Computing. In IEEE Transactions on Network and Service Management, 2014).

