In information technology, a backup, or data backup, is a copy of computer data taken and stored elsewhere so that it may be used to restore the original after a data loss event. The verb form, referring to the process of doing so, is "back up", whereas the noun and adjective form is "backup".
Backups can be used to recover data after its loss from data deletion or corruption, or to recover data from an earlier time.
Backups provide a simple form of disaster recovery; however, not all backup systems are able to reconstitute a computer system or other complex configuration such as a computer cluster, active directory server, or database server.
A backup system contains at least one copy of all data considered worth saving. The data storage requirements can be large. An information repository model may be used to provide structure to this storage. There are different types of data storage devices used for copying backups of data that is already in secondary storage onto archive files. [In contrast to everyday use of the term "archive", the data stored in an "archive file" is not necessarily old or of historical interest.] There are also different ways these devices can be arranged to provide geographic dispersion, data security, and portability.
Data is selected, extracted, and manipulated for storage. The process can include methods for
dealing with live data, including open files, as well as compression, encryption, and
de-duplication. Additional techniques apply to
enterprise client-server backup. Backup schemes may include
dry runs that validate the reliability of the data being backed up. There are limitations
and human factors involved in any backup scheme.
Storage
A backup strategy requires an information repository, "a secondary storage space for data" that aggregates backups of data "sources". The repository could be as simple as a list of all backup media (DVDs, etc.) and the dates produced, or could include a computerized index, catalog, or relational database.
The backup data needs to be stored, requiring a backup rotation scheme, which is a system of backing up data to computer media that limits the number of backups of different dates retained separately, by appropriate re-use of the data storage media by overwriting of backups no longer needed. The scheme determines how and when each piece of removable storage is used for a backup operation and how long it is retained once it has backup data stored on it. The 3-2-1 rule can aid in the backup process. It states that there should be at least 3 copies of the data, stored on 2 different types of storage media, and one copy should be kept offsite, in a remote location (this can include cloud storage). Two or more different media should be used to eliminate data loss due to similar causes (for example, optical discs may tolerate being underwater while LTO tapes may not, and SSDs cannot fail due to head crashes or damaged spindle motors since they don't have any moving parts, unlike hard drives). An offsite copy protects against fire, theft of physical media (such as tapes or discs), and natural disasters like floods and earthquakes. Disaster-protected hard drives like those made by ioSafe are an alternative to an offsite copy, but they have limitations like only being able to resist fire for a limited period of time, so an offsite copy still remains the ideal choice.
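As an illustration of the 3-2-1 rule, the following is a minimal sketch (not from the source) that audits a list of backup copies; the record fields `media_type` and `offsite` are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class BackupCopy:
    name: str        # e.g. "weekly LTO tape"
    media_type: str  # e.g. "tape", "hdd", "cloud"
    offsite: bool    # stored at a remote location?

def satisfies_3_2_1(copies: list[BackupCopy]) -> bool:
    """Check the 3-2-1 rule: >=3 copies, >=2 media types, >=1 offsite copy."""
    enough_copies = len(copies) >= 3
    enough_media = len({c.media_type for c in copies}) >= 2
    has_offsite = any(c.offsite for c in copies)
    return enough_copies and enough_media and has_offsite

# Example: a local disk image, a tape in the office safe, and a cloud copy.
copies = [
    BackupCopy("disk image", "hdd", offsite=False),
    BackupCopy("weekly tape", "tape", offsite=False),
    BackupCopy("cloud sync", "cloud", offsite=True),
]
print(satisfies_3_2_1(copies))  # True
```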
Backup methods
Unstructured
An unstructured repository may simply be a stack of tapes, DVD-Rs or external HDDs with minimal information about what was backed up and when. This method is the easiest to implement, but unlikely to achieve a high level of recoverability as it lacks automation.
Full only/System imaging
A repository using this backup method contains complete source data copies taken at one or more specific points in time. Copying system images, this method is frequently used by computer technicians to record known good configurations. However, imaging is generally more useful as a way of deploying a standard configuration to many systems rather than as a tool for making ongoing backups of diverse systems.
Incremental
An incremental backup stores data changed since a reference point in time. Duplicate copies of unchanged data aren't copied. Typically a full backup of all files is made once or at infrequent intervals, serving as the reference point for an incremental repository. Subsequently, a number of incremental backups are made after successive time periods. Restores begin with the last full backup and then apply the incrementals.
Some backup systems can create a synthetic full backup from a series of incrementals, thus providing the equivalent of frequently doing a full backup. When done to modify a single archive file, this speeds restores of recent versions of files.
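A minimal sketch of the selection step for an incremental backup, assuming the reference point is the timestamp of the previous backup and that file modification times are trustworthy (paths and helper names are hypothetical):

```python
import os
import time

def files_changed_since(source_dir: str, reference_time: float) -> list[str]:
    """Return paths whose modification time is newer than the reference point."""
    changed = []
    for root, _dirs, files in os.walk(source_dir):
        for name in files:
            path = os.path.join(root, name)
            if os.path.getmtime(path) > reference_time:
                changed.append(path)
    return changed

# Example: everything modified since the last backup, taken 24 hours ago.
last_backup = time.time() - 24 * 3600
for path in files_changed_since("/home/user/documents", last_backup):
    print("would copy:", path)
```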
Near-CDP
Continuous Data Protection (CDP) refers to a backup that instantly saves a copy of every change made to the data. This allows restoration of data to any point in time and is the most comprehensive and advanced data protection.
Near-CDP backup applications—often
marketed as "CDP"—automatically take incremental backups at a specific interval, for example every 15 minutes, one hour, or 24 hours. They can therefore only allow restores to an interval boundary.
Near-CDP backup applications use journaling and are typically based on periodic "snapshots", read-only copies of the data frozen at a particular point in time.
Near-CDP (except for Apple Time Machine) intent-logs every change on the host system, often by saving byte or block-level differences rather than file-level differences. This backup method differs from simple disk mirroring in that it enables a roll-back of the log and thus a restoration of old images of data. Intent-logging allows precautions for the consistency of live data, protecting ''self-consistent'' files but requiring ''applications'' "be quiesced and made ready for backup."
Near-CDP is more practicable for ordinary personal backup applications, as opposed to ''true'' CDP, which must be run in conjunction with a virtual machine
or equivalent
and is therefore generally used in enterprise client-server backups.
Software may create copies of individual files such as written documents, multimedia projects, or user preferences, to prevent failed write events caused by power outages, operating system crashes, or exhausted disk space, from causing data loss. A common implementation is an appended
".bak" extension to the
file name.
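A minimal sketch of this convention (an assumption for illustration, not the behaviour of any particular application): before rewriting a file, the previous version is copied to a sibling file with an appended ".bak" extension.

```python
import os
import shutil

def write_with_bak(path: str, new_contents: bytes) -> None:
    """Keep the previous version as '<name>.bak' before overwriting the file."""
    if os.path.exists(path):
        shutil.copy2(path, path + ".bak")  # copy2 also preserves timestamps/permissions
    with open(path, "wb") as f:
        f.write(new_contents)

write_with_bak("notes.txt", b"revised draft\n")
# notes.txt now holds the new data; notes.txt.bak holds the prior version.
```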
Reverse incremental
A reverse incremental backup method stores a recent archive file "mirror" of the source data and a series of differences between the "mirror" in its current state and its previous states. A reverse incremental backup method starts with a non-image full backup. After the full backup is performed, the system periodically synchronizes the full backup with the live copy, while storing the data necessary to reconstruct older versions. This can either be done using hard links—as Apple Time Machine does—or using binary diffs.
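A minimal sketch of the hard-link approach, assuming a local "mirror" directory that is kept in sync with the source: before each re-sync, the current mirror is frozen as a snapshot made of hard links, so unchanged files share storage with the snapshot while the mirror always represents the newest state (directory names are hypothetical).

```python
import os

def freeze_mirror(mirror_dir: str, snapshot_dir: str) -> None:
    """Preserve the current mirror state as a snapshot made of hard links."""
    for root, _dirs, files in os.walk(mirror_dir):
        rel = os.path.relpath(root, mirror_dir)
        dest_root = os.path.join(snapshot_dir, rel)
        os.makedirs(dest_root, exist_ok=True)
        for name in files:
            # A hard link shares data blocks with the mirror copy, so the
            # snapshot costs almost no extra space until the mirror changes.
            os.link(os.path.join(root, name), os.path.join(dest_root, name))

# Usage: freeze, then re-sync the mirror with the live source data.
freeze_mirror("/backups/mirror", "/backups/snapshot-2024-01-01")
```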
Differential
A differential backup saves only the data that has changed since the last full backup. This means a maximum of two backups from the repository are used to restore the data. However, as time from the last full backup (and thus the accumulated changes in data) increases, so does the time to perform the differential backup. Restoring an entire system requires starting from the most recent full backup and then applying just the last differential backup.
A differential backup copies files that have been created or changed since the last full backup, regardless of whether any other differential backups have been made since, whereas an incremental backup copies files that have been created or changed since the most recent backup of any type (full or incremental). Changes in files may be detected through a more recent date/time of last modification
file attribute, and/or changes in file size. Other variations of incremental backup include multi-level incrementals and block-level incrementals that compare parts of files instead of just entire files.
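The difference in restore chains can be made concrete with a small sketch (an illustration, not from the source): with incrementals, a restore needs the last full backup plus every later incremental, whereas with differentials it needs only the last full backup and the most recent differential.

```python
# Two hypothetical schemes; each list is in chronological order.
incremental_scheme = ["full", "inc", "inc", "inc", "inc"]
differential_scheme = ["full", "diff", "diff", "diff", "diff"]

def restore_chain(scheme: list[str]) -> list[int]:
    """Indices of the backups needed to restore the latest state."""
    last_full = max(i for i, kind in enumerate(scheme) if kind == "full")
    if "inc" in scheme[last_full + 1:]:
        # Incrementals: the full backup plus every incremental after it.
        return list(range(last_full, len(scheme)))
    # Differentials: the full backup plus only the most recent differential.
    return [last_full, len(scheme) - 1]

print(restore_chain(incremental_scheme))   # [0, 1, 2, 3, 4] -> 5 backups to read
print(restore_chain(differential_scheme))  # [0, 4] -> 2 backups to read
```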
Storage media
Regardless of the repository model that is used, the data has to be copied onto an archive file data storage medium. The medium used is also referred to as the type of backup destination.
Magnetic tape
Magnetic tape was for a long time the most commonly used medium for bulk data storage, backup, archiving, and interchange. It was previously a less expensive option, but this is no longer the case for smaller amounts of data.
Tape is a
sequential access medium, so the rate of continuously writing or reading data can be very fast. While tape media itself has a low cost per space,
tape drives are typically dozens of times as expensive as
hard disk drives and
optical drives.
Many tape formats have been proprietary or specific to certain markets like mainframes or a particular brand of personal computer. By 2014 LTO had become the primary tape technology.
The other remaining viable "super" format is the
IBM 3592 (also referred to as the TS11xx series). The
Oracle StorageTek T10000 was discontinued in 2016.
Hard disk
The use of hard disk storage has increased over time as it has become progressively cheaper. Hard disks are usually easy to use, widely available, and can be accessed quickly.
However, hard disks are close-tolerance mechanical devices and may be more easily damaged than tapes, especially while being transported.
In the mid-2000s, several drive manufacturers began to produce portable drives employing
ramp loading and accelerometer technology (sometimes termed a "shock sensor"),
and by 2010 the industry average in drop tests for drives with that technology showed drives remaining intact and working after a 36-inch non-operating drop onto industrial carpeting.
Some manufacturers also offer 'ruggedized' portable hard drives, which include a shock-absorbing case around the hard disk, and
claim a range of higher drop specifications.
Over a period of years the stability of hard disk backups is shorter than that of tape backups.
External hard disks can be connected via local interfaces like SCSI, USB, FireWire, or eSATA, or via longer-distance technologies like Ethernet, iSCSI, or Fibre Channel. Some disk-based backup systems, via Virtual Tape Libraries or otherwise, support data deduplication, which can reduce the amount of disk storage capacity consumed by daily and weekly backup data.
Optical storage
Optical storage uses lasers to store and retrieve data. Recordable CDs, DVDs, and Blu-ray Discs are commonly used with personal computers and are generally cheap. In the past, the capacities and speeds of these discs have been lower than hard disks or tapes, although advances in optical media are slowly shrinking that gap.
Potential future data losses caused by gradual media degradation can be predicted by measuring the rate of correctable minor data errors, of which consecutively too many increase the risk of uncorrectable sectors. Support for error scanning varies among optical drive vendors.
Many optical disc formats are WORM type, which makes them useful for archival purposes since the data cannot be changed. Moreover, optical discs are not vulnerable to head crashes, magnetism, imminent water ingress or power surges, and a fault of the drive typically just halts the spinning.
Optical media is
modular; the storage controller is not tied to media itself like with hard drives or flash storage (→
flash memory controller), allowing it to be removed and accessed through a different drive. However, recordable media may degrade earlier under long-term exposure to light.
Some optical storage systems allow for cataloged data backups without human contact with the discs, allowing for longer data integrity. A French study in 2008 indicated that the lifespan of typically-sold
CD-Rs was 2–10 years,
but one manufacturer later estimated the longevity of its CD-Rs with a gold-sputtered layer to be as high as 100 years. Sony's
proprietary Optical Disc Archive could, as of 2016, reach a read rate of 250 MB/s.
Solid-state drive
Solid-state drives (SSDs) use integrated circuit assemblies to store data. Flash memory, thumb drives, USB flash drives, CompactFlash, SmartMedia, Memory Sticks, and Secure Digital card devices are relatively expensive for their low capacity, but convenient for backing up relatively low data volumes. A solid-state drive does not contain any movable parts, making it less susceptible to physical damage, and can have huge throughput of around 500 Mbit/s up to 6 Gbit/s. Available SSDs have become more capacious and cheaper.
[Flash memory backups are stable for fewer years than hard disk backups.]
Remote backup service
Remote backup services or cloud backups involve service providers storing data offsite. This has been used to protect against events such as fires, floods, or earthquakes which could destroy locally stored backups. Cloud-based backup (through services like Google Drive and Microsoft OneDrive, or similar) provides a layer of data protection. However, the users must trust the provider to maintain the privacy and integrity of their data, with confidentiality enhanced by the use of encryption. Because speed and availability are limited by a user's online connection, users with large amounts of data may need to use cloud seeding and large-scale recovery.
Management
Various methods can be used to manage backup media, striking a balance between accessibility, security and cost. These media management methods are not mutually exclusive and are frequently combined to meet the user's needs. Using on-line disks for staging data before it is sent to a near-line tape library is a common example.
Online
Online backup storage is typically the most accessible type of data storage, and can begin a restore in milliseconds. An internal hard disk or a disk array (possibly connected to a SAN) is an example of an online backup. This type of storage is convenient and speedy, but is vulnerable to being deleted or overwritten, either by accident, by malevolent action, or in the wake of a data-deleting virus payload.
Near-line
Nearline storage is typically less accessible and less expensive than online storage, but still useful for backup data storage. A mechanical device is usually used to move media units from storage into a drive where the data can be read or written. Generally it has safety properties similar to on-line storage. An example is a tape library with restore times ranging from seconds to a few minutes.
Off-line
Off-line storage requires some direct action to provide access to the storage media: for example, inserting a tape into a tape drive or plugging in a cable. Because the data is not accessible via any computer except during limited periods in which it is written or read back, it is largely immune to on-line backup failure modes. Access time varies depending on whether the media are on-site or off-site.
Off-site data protection
Backup media may be sent to an off-site vault to protect against a disaster or other site-specific problem. The vault can be as simple as a system administrator's home office or as sophisticated as a disaster-hardened, temperature-controlled, high-security bunker with facilities for backup media storage. A data replica can be off-site but also on-line (e.g., an off-site RAID mirror). Such a replica has fairly limited value as a backup.
Backup site
A backup site or disaster recovery center is used to store data that can enable computer systems and networks to be restored and properly configured in the event of a disaster. Some organisations have their own data recovery centres, while others contract this out to a third party. Due to high costs, backing up is rarely considered the preferred method of moving data to a DR site. A more typical way would be remote disk mirroring, which keeps the DR data as up to date as possible.
Selection and extraction of data
A backup operation starts with selecting and extracting coherent units of data. Most data on modern computer systems is stored in discrete units, known as files. These files are organized into filesystems. Deciding what to back up at any given time involves tradeoffs. By backing up too much redundant data, the information repository will fill up too quickly. Backing up an insufficient amount of data can eventually lead to the loss of critical information.
Files
* Copying files : Making copies of files is the simplest and most common way to perform a backup. A means to perform this basic function is included in all backup software and all operating systems.
*Partial file copying: A backup may include only the blocks or bytes within a file that have changed in a given period of time (a block-level sketch follows this list). This can substantially reduce needed storage space, but requires higher sophistication to reconstruct files in a restore situation. Some implementations require integration with the source file system.
*Deleted files : To prevent the unintentional restoration of files that have been intentionally deleted, a record of the deletion must be kept.
*Versioning of files : Most backup applications, other than those that do only full only/System imaging, also back up files that have been modified since the last backup. "That way, you can retrieve many different versions of a given file, and if you delete it on your hard disk, you can still find it in your information repository archive."
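As referenced above, a minimal sketch of block-level partial file copying (an illustration under an assumed fixed block size, not any particular product's format): the file is read in blocks, each block is hashed, and only blocks whose hash differs from the previous backup are stored.

```python
import hashlib

BLOCK_SIZE = 4096  # assumed fixed block size for the illustration

def changed_blocks(path: str, previous_hashes: dict[int, str]) -> dict[int, bytes]:
    """Return the blocks of `path` whose content differs from the last backup."""
    changed = {}
    with open(path, "rb") as f:
        index = 0
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            digest = hashlib.sha256(block).hexdigest()
            if previous_hashes.get(index) != digest:
                changed[index] = block  # only this block needs to be stored
            index += 1
    return changed
```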
Filesystems
*Filesystem dump: A copy of the whole filesystem can be made at block level. This is also known as a "raw partition backup" and is related to disk imaging. The process usually involves unmounting the filesystem and running a program like dd (Unix); a minimal invocation is sketched after this list. Because the disk is read sequentially and with large buffers, this type of backup can be faster than reading every file normally, especially when the filesystem contains many small files, is highly fragmented, or is nearly full. But because this method also reads the free disk blocks that contain no useful data, this method can also be slower than conventional reading, especially when the filesystem is nearly empty. Some filesystems, such as XFS, provide a "dump" utility that reads the disk sequentially for high performance while skipping unused sections. The corresponding restore utility can selectively restore individual files or the entire volume at the operator's choice.
*Identification of changes: Some filesystems have an archive bit for each file that says it was recently changed. Some backup software looks at the date of the file and compares it with the last backup to determine whether the file was changed.
* Versioning file system : A versioning filesystem tracks all changes to a file. The NILFS versioning filesystem for Linux is an example.
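As referenced in the filesystem-dump item, a minimal sketch of a raw partition backup driven from Python; the device and image paths are placeholders, and the filesystem is assumed to be unmounted (or frozen) before the copy is taken.

```python
import subprocess

# Raw partition backup with dd: read the block device sequentially and write
# it to an image file. bs=4M uses large buffers for sequential throughput.
subprocess.run(
    ["dd", "if=/dev/sdX1", "of=/backups/sdX1.img", "bs=4M"],
    check=True,  # raise if dd exits with a non-zero status
)
```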
Live data
Files that are actively being updated present a challenge to back up. One way to back up live data is to temporarily quiesce them (e.g., close all files), take a "snapshot", and then resume live operations. At this point the snapshot can be backed up through normal methods. A snapshot is an instantaneous function of some filesystems that presents a copy of the filesystem as if it were frozen at a specific point in time, often by a copy-on-write mechanism. Snapshotting a file while it is being changed results in a corrupted file that is unusable. This is also the case across interrelated files, as may be found in a conventional database or in applications such as Microsoft Exchange Server. The term fuzzy backup can be used to describe a backup of live data that looks like it ran correctly, but does not represent the state of the data at a single point in time.
Backup options for data files that cannot be or are not quiesced include:
*Open file backup: Many backup software applications undertake to back up open files in an internally consistent state. Some applications simply check whether open files are in use and try again later. Other applications exclude open files that are updated very frequently. Some low-availability interactive applications can be backed up via natural/induced pausing.
*Interrelated database files backup: Some interrelated database file systems offer a means to generate a "hot backup" of the database while it is online and usable. This may include a snapshot of the data files plus a snapshotted log of changes made while the backup is running. Upon a restore, the changes in the log files are applied to bring the copy of the database up to the point in time at which the initial backup ended. Other low-availability interactive applications can be backed up via coordinated snapshots. However, genuinely high-availability interactive applications can only be backed up via Continuous Data Protection.
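As an illustration of a "hot backup" of a database that stays online, the following is a minimal sketch using SQLite's online backup API (chosen here purely as an example; the source does not name a specific database, and the file names are hypothetical).

```python
import sqlite3

# Open the live database and a destination file for the backup copy.
live = sqlite3.connect("app.db")
dest = sqlite3.connect("app-backup.db")

with dest:
    # Connection.backup copies the live database into the destination,
    # producing a consistent copy without taking the source offline.
    live.backup(dest)

dest.close()
live.close()
```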
Metadata
Not all information stored on the computer is stored in files. Accurately recovering a complete system from scratch requires keeping track of this non-file data too.
*System description: System specifications are needed to procure an exact replacement after a disaster.
* Boot sector : The boot sector can sometimes be recreated more easily than saving it. It usually isn't a normal file and the system won't boot without it.
* Partition layout: The layout of the original disk, as well as partition tables and filesystem settings, is needed to properly recreate the original system.
*File metadata : Each file's permissions, owner, group, ACLs, and any other metadata need to be backed up for a restore to properly recreate the original environment (a sketch follows this list).
*System metadata: Different operating systems have different ways of storing configuration information. Microsoft Windows keeps a registry of system information that is more difficult to restore than a typical file.
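As referenced in the file metadata item, a minimal sketch of capturing per-file metadata alongside the data itself, using os.stat for permissions, owner, group, and timestamps (ACLs and other OS-specific attributes are left out of this illustration):

```python
import json
import os
import stat

def file_metadata(path: str) -> dict:
    """Record the metadata needed to recreate the file's environment on restore."""
    st = os.stat(path)
    return {
        "path": path,
        "mode": stat.filemode(st.st_mode),  # e.g. "-rw-r--r--"
        "uid": st.st_uid,
        "gid": st.st_gid,
        "size": st.st_size,
        "mtime": st.st_mtime,
    }

# Store the metadata next to the backed-up data, e.g. as a JSON manifest.
print(json.dumps(file_metadata("/etc/hostname"), indent=2))
```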
Manipulation of data and dataset optimization
It is frequently useful or required to manipulate the data being backed up to optimize the backup process. These manipulations can improve backup speed, restore speed, data security, and media usage, and/or reduce bandwidth requirements.
Automated data grooming
Out-of-date data can be automatically deleted, but for personal backup applications—as opposed to enterprise client-server backup applications where automated data "grooming" can be customized—the deletion[Some backup applications—notably rsync and CrashPlan—term removing backup data "pruning" instead of "grooming".] can at most be globally delayed or be disabled.
Compression
Various schemes can be employed to shrink the size of the source data to be stored so that it uses less storage space. Compression is frequently a built-in feature of tape drive hardware.
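A minimal sketch of software-side compression during backup, assuming a gzip-compressed tar archive as the archive file format (one common choice, not the only one; paths are placeholders):

```python
import tarfile

# Pack the source directory into a single gzip-compressed archive file.
with tarfile.open("/backups/documents-2024-01-01.tar.gz", "w:gz") as archive:
    archive.add("/home/user/documents", arcname="documents")
```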
Deduplication
Redundancy due to backing up similarly configured workstations can be reduced, thus storing just one copy. This technique can be applied at the file or raw block level. This potentially large reduction is called deduplication. It can occur on a server before any data moves to backup media, sometimes referred to as source/client side deduplication. This approach also reduces bandwidth required to send backup data to its target media. The process can also occur at the target storage device, sometimes referred to as inline or back-end deduplication.
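A minimal sketch of content-addressed, file-level deduplication (an illustration of the idea, not a description of any particular product): each file is hashed, and a file whose hash has already been stored is recorded as a reference instead of a second copy.

```python
import hashlib

store: dict[str, bytes] = {}   # hash -> file contents (the single stored copy)
index: dict[str, str] = {}     # file path -> hash (reference into the store)

def backup_file(path: str) -> None:
    """Store each unique file content once; duplicates become references."""
    with open(path, "rb") as f:
        data = f.read()
    digest = hashlib.sha256(data).hexdigest()
    if digest not in store:
        store[digest] = data      # first time this content is seen
    index[path] = digest          # every path is still restorable via the index

backup_file("/home/alice/report.docx")
backup_file("/home/bob/report.docx")   # identical content is stored only once
```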
Duplication
Sometimes backups are duplicated to a second set of storage media. This can be done to rearrange the archive files to optimize restore speed, or to have a second copy at a different location or on a different storage medium—as in the disk-to-disk-to-tape capability of Enterprise client-server backup.
Encryption
High-capacity removable storage media such as backup tapes present a data security risk if they are lost or stolen. Encrypting the data on these media can mitigate this problem; however, encryption is a CPU-intensive process that can slow down backup speeds, and the security of the encrypted backups is only as effective as the security of the key management policy.
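A minimal sketch of encrypting backup data before it is written to removable media, using the third-party cryptography package's Fernet recipe as one example; key management, the harder part noted above, is out of scope here, and the file names are placeholders.

```python
from cryptography.fernet import Fernet

# Generate a key once and protect it according to the key-management policy;
# losing the key makes the encrypted backups unrecoverable.
key = Fernet.generate_key()
cipher = Fernet(key)

with open("backup.tar.gz", "rb") as f:
    ciphertext = cipher.encrypt(f.read())

with open("backup.tar.gz.enc", "wb") as f:
    f.write(ciphertext)

# Restore path: cipher.decrypt(ciphertext) returns the original archive bytes.
```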
Multiplexing
When there are many more computers to be backed up than there are destination storage devices, the ability to use a single storage device with several simultaneous backups can be useful. However, cramming the scheduled backup window via "multiplexed backup" is only used for tape destinations.
Refactoring
The process of rearranging the sets of backups in an archive file is known as refactoring. For example, if a backup system uses a single tape each day to store the incremental backups for all the protected computers, restoring one of the computers could require many tapes. Refactoring could be used to consolidate all the backups for a single computer onto a single tape, creating a "synthetic full backup". This is especially useful for backup systems that do incrementals forever style backups.
Staging
Sometimes backups are copied to a staging disk before being copied to tape. This process is sometimes referred to as D2D2T, an acronym for disk-to-disk-to-tape. It can be useful if there is a problem matching the speed of the final destination device with the source device, as is frequently faced in network-based backup systems. It can also serve as a centralized location for applying other data manipulation techniques.
Objectives
* Recovery point objective (RPO) : The point in time that the restarted infrastructure will reflect, expressed as "the maximum targeted period in which data (transactions) might be lost from an IT service due to a major incident". Essentially, this is the roll-back that will be experienced as a result of the recovery. The most desirable RPO would be the point just prior to the data loss event. Making a more recent recovery point achievable requires increasing the frequency of synchronization between the source data and the backup repository.
*Recovery time objective (RTO) : The amount of time elapsed between disaster and restoration of business functions.
* Data security : In addition to preserving access to data for its owners, data must be restricted from unauthorized access. Backups must be performed in a manner that does not compromise the original owner's undertaking. This can be achieved with data encryption and proper media handling policies.
* Data retention period : Regulations and policy can lead to situations where backups are expected to be retained for a particular period, but not any further. Retaining backups after this period can lead to unwanted liability and sub-optimal use of storage media.
* Checksum or hash function validation : Applications that back up to tape archive files need this option to verify that the data was accurately copied; a minimal sketch follows this list.
* Backup process monitoring : Enterprise client-server backup applications need a user interface that allows administrators to monitor the backup process, and proves compliance to regulatory bodies outside the organization; for example, an insurance company in the USA might be required under HIPAA to demonstrate that its client data meet records retention requirements. [HIPAA Advisory. Retrieved 10 March 2007.]
* User-initiated backups and restores : To avoid or recover from ''minor'' disasters, such as inadvertently deleting or overwriting the "good" versions of one or more files, the computer user—rather than an administrator—may initiate backups and restores (from not necessarily the most-recent backup) of files or folders.
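As referenced in the checksum/hash validation item, a minimal sketch of verifying a copied archive file against the original using SHA-256 (the choice of hash, chunk size, and paths here is illustrative):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1024 * 1024) -> str:
    """Hash a file in chunks so large archive files don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

original = sha256_of("backup.tar.gz")
copied = sha256_of("/mnt/tape_staging/backup.tar.gz")
print("copy verified" if original == copied else "MISMATCH: copy is corrupt")
```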
See also
;About backup
* Backup software & services
** List of backup software
** List of online backup services
* Glossary of backup terms
* Virtual backup appliance
;Related topics
* Data consistency
* Data degradation
* Data portability
* Data proliferation
* Database dump
* Digital preservation
* Disaster recovery and business continuity auditing
* World Backup Day