Log-structured File System (BSD)

The Log-Structured File System (or LFS) is an implementation of a log-structured file system (a concept originally proposed and implemented by John Ousterhout), developed for BSD. It was removed from FreeBSD and OpenBSD; the NetBSD implementation was nonfunctional until work leading up to the 4.0 release made it viable again as a production file system.


Design

Most of the on-disk format of LFS is borrowed from UFS. The indirect block, inode and directory formats are almost identical. This allows well-tested UFS file system code to be re-used; current implementations of LFS share the higher-level UFS code with the lower-level code for FFS, since both of these file systems share much in common with UFS.

LFS divides the disk into ''segments'', only one of which is active at any one time. Each segment has a header called a ''summary block''. Each summary block contains a pointer to the next summary block, linking segments into one long chain that LFS treats as a linear log. The segments do not necessarily have to be adjacent to each other on disk; for this reason, larger segment sizes (between 384 KB and 1 MB) are recommended because they amortize the cost of seeking between segments.

Whenever a file or directory is changed, LFS writes to the head of this log (see the write-ordering sketch below):
# Any changed or new data blocks.
# Indirect blocks updated to point to (1).
# Inodes updated to point to (2).
# Inode map blocks updated to point at (3).

Unlike UFS, inodes in LFS do not have fixed locations. An inode map, a flat list of inode block locations, is used to track them. As with everything else, inode map blocks are also written to the log when they are changed.

When a segment is filled, LFS goes on to fill the next free or ''clean'' segment. Segments are said to be ''dirty'' if they contain ''live'' blocks, or blocks for which no newer copies exist further ahead in the log.
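The structures described so far can be pictured with a short sketch. The C declarations below are illustrative only; the type and field names (lfs_daddr_t, ss_next, im_daddr and so on) are hypothetical and do not match the actual struct definitions in the BSD LFS sources. They show a per-segment summary block chaining segments into a log, and an inode map entry recording where an inode's most recent copy lives.

<syntaxhighlight lang="c">
/*
 * Rough sketch of the on-disk structures described above.
 * All names and widths are hypothetical, not the real BSD LFS layout.
 */
#include <stdint.h>

typedef uint32_t lfs_daddr_t;           /* a block's address on disk */

/* Per-segment "summary block": chains segments into one linear log and
 * records which of the segment's blocks are still live. */
struct segment_summary {
    uint32_t    ss_magic;               /* marks a valid summary block */
    lfs_daddr_t ss_next;                /* summary block of the next segment */
    uint32_t    ss_nblocks;             /* number of blocks in this segment */
    /* ... per-block liveness information follows on disk ... */
};

/* Inode map entry: inodes have no fixed home, so each inode number is
 * mapped to the disk address of its most recently written copy. */
struct imap_entry {
    lfs_daddr_t im_daddr;               /* latest location of the inode */
    uint32_t    im_version;             /* bumped when the inode number is reused */
};

/* Finding an inode is therefore an inode-map lookup rather than an
 * arithmetic offset into a fixed inode table, as it would be in UFS/FFS. */
static inline lfs_daddr_t imap_lookup(const struct imap_entry *imap, uint32_t inum)
{
    return imap[inum].im_daddr;
}
</syntaxhighlight>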
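To make the write ordering listed above concrete, here is a minimal, self-contained sketch that models the log as an in-memory array of blocks. Everything in it (the block size, the helper append_to_log(), the way the inode and inode map are represented) is made up and greatly simplified; it only illustrates the order in which the four kinds of blocks reach the head of the log.

<syntaxhighlight lang="c">
/* Toy model of the LFS write ordering; not real kernel code. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLKSZ 4096
#define NBLKS 64

static uint8_t  log_blocks[NBLKS][BLKSZ];   /* the "disk", written sequentially */
static uint32_t log_head;                   /* next free block at the head of the log */

/* Append one block at the head of the log, return its "disk address". */
static uint32_t append_to_log(const void *blk)
{
    memcpy(log_blocks[log_head], blk, BLKSZ);
    return log_head++;
}

int main(void)
{
    uint8_t  data[BLKSZ] = "new file contents";
    uint32_t indirect[BLKSZ / 4] = {0};     /* block pointers */
    uint32_t inode[BLKSZ / 4]    = {0};     /* stand-in for an on-disk inode */
    uint32_t imap[BLKSZ / 4]     = {0};     /* one inode map block */
    uint32_t inum = 7;

    /* 1. Changed or new data blocks. */
    uint32_t data_addr = append_to_log(data);

    /* 2. Indirect blocks updated to point to (1). */
    indirect[0] = data_addr;
    uint32_t ind_addr = append_to_log(indirect);

    /* 3. Inodes updated to point to (2). */
    inode[0] = ind_addr;
    uint32_t ino_addr = append_to_log(inode);

    /* 4. Inode map blocks updated to point at (3). */
    imap[inum] = ino_addr;
    uint32_t imap_addr = append_to_log(imap);

    printf("data=%u indirect=%u inode=%u imap=%u\n",
           data_addr, ind_addr, ino_addr, imap_addr);
    return 0;
}
</syntaxhighlight>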
The LFS ''garbage collector'' turns dirty segments into ''clean'' ones by copying live blocks from the dirty segment into the current segment and skipping the rest. The summary block in each segment contains a map to track live blocks. Generally, garbage collection is delayed until there are no clean segments left; it can also be deferred for when the system is idle. Even then, only the least-dirty segments are picked for collection. This is intended to avoid the penalty of cleaning full segments when I/O bandwidth is most needed.

At a ''checkpoint'' (usually scheduled about once every 30 seconds), LFS writes the last known block locations of the inode map and the number of the current segment to a ''checkpoint region'' at a fixed place on disk. There are two such regions; LFS alternates between them each checkpoint. Once written, a ''checkpoint'' represents the last consistent snapshot of the file system. Recovery after a crash and normal mounting work the same way: the file system simply reconstructs its state from the last checkpoint and resumes logging from there.
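The checkpoint mechanism can be sketched in the same toy style. In the code below the two checkpoint regions are plain structs and the "disk addresses" are integers; the names (cp_serial, write_checkpoint(), recover()) are invented for illustration and are not the real LFS interfaces. The point is only the alternation between the two regions, and the fact that mounting and crash recovery both simply pick the newer checkpoint and resume from it.

<syntaxhighlight lang="c">
/* Toy model of the two alternating checkpoint regions; not real kernel code. */
#include <stdint.h>
#include <stdio.h>

struct checkpoint {
    uint64_t cp_serial;        /* increases with every checkpoint written */
    uint32_t cp_imap_addr;     /* last known location of the inode map blocks */
    uint32_t cp_cur_segment;   /* segment that was active at checkpoint time */
    uint32_t cp_checksum;      /* would detect a checkpoint torn by a crash (sketch) */
};

/* Two fixed checkpoint regions; alternating between them means a crash
 * during a checkpoint write still leaves one intact copy. */
static struct checkpoint cp_region[2];
static int cp_next;            /* which region the next checkpoint overwrites */

static void write_checkpoint(uint32_t imap_addr, uint32_t cur_segment)
{
    static uint64_t serial;
    cp_region[cp_next] = (struct checkpoint){
        .cp_serial      = ++serial,
        .cp_imap_addr   = imap_addr,
        .cp_cur_segment = cur_segment,
    };
    cp_next ^= 1;              /* alternate regions on the next checkpoint */
}

/* Mounting and crash recovery are the same operation: load the newer
 * checkpoint and resume logging from the segment it names. */
static const struct checkpoint *recover(void)
{
    return cp_region[0].cp_serial > cp_region[1].cp_serial
               ? &cp_region[0] : &cp_region[1];
}

int main(void)
{
    write_checkpoint(/* imap_addr */ 1203, /* cur_segment */ 17);
    write_checkpoint(/* imap_addr */ 1750, /* cur_segment */ 18);

    const struct checkpoint *cp = recover();
    printf("resume: segment %u, inode map at %u\n",
           cp->cp_cur_segment, cp->cp_imap_addr);
    return 0;
}
</syntaxhighlight>

A real implementation would additionally need to detect a checkpoint torn in mid-write (for example via a checksum like the field sketched above) and fall back to the older region.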


Disadvantages

* There can be severe file system fragmentation in LFS, especially for slowly growing files or multiple simultaneous large writes. This inflicts a significant performance penalty, even though the design rationale for log-structured file systems assumes that disk reads will mostly be cached away.
* LFS becomes progressively less efficient as it nears maximum capacity, because the garbage collector has to run almost constantly to make clean segments available.
* LFS does not allow snapshotting or versioning, even though both features are trivial to implement in general on log-structured file systems.

