Log-structured File System (BSD)

The Log-Structured File System (or LFS) is an implementation of a log-structured file system (a concept originally proposed and implemented by John Ousterhout), originally developed for BSD. It was removed from FreeBSD and OpenBSD; the NetBSD implementation was nonfunctional until work leading up to the 4.0 release made it viable again as a production file system.


Design

Most of the on-disk format of LFS is borrowed from UFS. The indirect block, inode and directory formats are almost identical. This allows well-tested UFS file system code to be re-used; current implementations of LFS share the higher-level UFS code with the lower-level code for FFS, since both of these file systems share much in common with UFS.

LFS divides the disk into ''segments'', only one of which is active at any one time. Each segment has a header called a ''summary block''. Each summary block contains a pointer to the next summary block, linking segments into one long chain that LFS treats as a linear log. The segments do not have to be adjacent to each other on disk; for this reason, larger segment sizes (between 384 KB and 1 MB) are recommended, because they amortize the cost of seeking between segments.

Whenever a file or directory is changed, LFS writes to the head of this log:
# Any changed or new data blocks.
# Indirect blocks updated to point to (1).
# Inodes updated to point to (2).
# Inode map blocks updated to point to (3).

Unlike UFS, inodes in LFS do not have fixed locations. An inode map, a flat list of inode block locations, is used to track them; as with everything else, inode map blocks are written to the log when they change.

When a segment is filled, LFS goes on to fill the next free, or ''clean'', segment. Segments are said to be ''dirty'' if they contain ''live'' blocks, that is, blocks for which no newer copies exist further ahead in the log. The LFS ''garbage collector'' turns dirty segments into clean ones by copying live blocks from a dirty segment into the current segment and skipping the rest; the summary block in each segment contains the map used to track which blocks are live. Generally, garbage collection is delayed until there are no clean segments left; it can also be deferred to times when the system is idle. Even then, only the least-dirty segments are picked for collection, which avoids the penalty of cleaning full segments when I/O bandwidth is most needed.

At a ''checkpoint'' (usually scheduled about once every 30 seconds), LFS writes the last known block locations of the inode map and the number of the current segment to a ''checkpoint region'' at a fixed place on disk. There are two such regions, and LFS alternates between them at each checkpoint. Once written, a checkpoint represents the last consistent snapshot of the file system. Recovery after a crash and normal mounting work the same way: the file system simply reconstructs its state from the last checkpoint and resumes logging from there.
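
The moving parts described above (summary blocks chaining the segments into a log, the flat inode map, and the checkpoint regions) can be summarized in a short C sketch. This is only an illustration of the concepts: the structure layouts and field names below are hypothetical and do not correspond to the actual BSD LFS headers.

<syntaxhighlight lang="c">
/* Hypothetical, simplified structures for illustration only --
 * not the real on-disk layout used by the BSD LFS implementations. */
#include <stdint.h>
#include <stdio.h>

struct summary_block {        /* header at the front of every segment */
    uint32_t next_summary;    /* disk address of the next segment's summary,
                                 chaining all segments into one linear log */
    uint32_t nblocks;         /* number of block records that follow; the
                                 cleaner uses them to find live blocks */
};

struct ifile_entry {          /* one entry of the flat inode map */
    uint32_t inode_daddr;     /* current disk address of this inode */
    uint32_t version;         /* bumped when the inode number is reused */
};

struct checkpoint_region {    /* written alternately to two fixed locations */
    uint64_t serial;          /* the newer of the two copies wins at mount */
    uint32_t ifile_daddr;     /* last written location of the inode map */
    uint32_t cur_segment;     /* segment that was being filled */
};

/* Because inodes have no fixed home, every lookup goes through the map:
 * checkpoint -> inode map -> inode -> indirect blocks -> data blocks,
 * i.e. the reverse of the write order listed above. */
static uint32_t locate_inode(const struct ifile_entry *map, uint32_t ino)
{
    return map[ino].inode_daddr;
}

int main(void)
{
    struct ifile_entry map[3] = { {0, 0}, {112, 1}, {736, 4} };
    printf("inode 2 currently lives at disk block %u\n",
           (unsigned)locate_inode(map, 2));
    return 0;
}
</syntaxhighlight>

The extra level of indirection through the inode map is the key difference from UFS: it is what allows an inode to move to a new location on every write instead of being updated in place.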
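
Mount-time and crash recovery can likewise be sketched in a few lines, under the same caveat that the names and values here are made up for illustration: read both fixed checkpoint regions, keep whichever was written more recently, and resume logging from the segment it records.

<syntaxhighlight lang="c">
/* Illustrative only: recovery and normal mounting follow the same path. */
#include <stdint.h>
#include <stdio.h>

struct checkpoint_region {
    uint64_t serial;          /* monotonically increasing write count */
    uint32_t ifile_daddr;     /* where the inode map was last written */
    uint32_t cur_segment;     /* segment to resume logging in */
};

/* Stand-in for reading one of the two fixed checkpoint locations on disk;
 * a real implementation would read and validate actual disk blocks. */
static struct checkpoint_region read_checkpoint(int which)
{
    static const struct checkpoint_region fake[2] = {
        { 41, 9000, 17 },     /* older copy */
        { 42, 9480, 18 },     /* newer copy, written ~30 seconds later */
    };
    return fake[which];
}

int main(void)
{
    struct checkpoint_region a = read_checkpoint(0);
    struct checkpoint_region b = read_checkpoint(1);

    /* The newer complete checkpoint is the last consistent snapshot. */
    struct checkpoint_region last = (a.serial > b.serial) ? a : b;

    printf("resume from segment %u, inode map at block %u\n",
           (unsigned)last.cur_segment, (unsigned)last.ifile_daddr);
    return 0;
}
</syntaxhighlight>

Alternating between two regions means a crash in the middle of writing one checkpoint still leaves the other copy complete, so there is always a consistent snapshot to fall back on.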


Disadvantages

* There can be severe file system fragmentation in LFS, especially for slowly growing files or multiple simultaneous large writes. This inflicts a severe performance penalty, even though the design rationale for log-structured file systems assumes that disk reads will mostly be absorbed by the cache.
* LFS becomes progressively less efficient as it nears maximum capacity, when the garbage collector has to run almost constantly to make clean segments available (a rough cost sketch follows this list).
* LFS does not allow snapshotting or versioning, even though both features are trivial to implement on log-structured file systems in general.
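
The capacity problem can be made concrete with the back-of-the-envelope cleaning cost used in the original log-structured file system analysis; this is a sketch of the general argument, not a measurement of the BSD implementation. Suppose the segments chosen for cleaning still hold a live fraction ''u'' of their blocks. Recovering the remaining 1 − ''u'' of space requires reading the whole segment (cost 1) and rewriting its live blocks (cost ''u''), on top of writing the 1 − ''u'' of new data that now fits:

<math>\text{write cost} = \frac{1 + u + (1 - u)}{1 - u} = \frac{2}{1 - u}</math>

At ''u'' = 0.5 every byte of new data costs about four bytes of I/O, and at ''u'' = 0.9 about twenty. As the file system approaches maximum capacity the cleaner has fewer nearly empty segments to choose from, so ''u'' rises and the effective write bandwidth drops sharply.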

