Activity recognition aims to recognize the actions and goals of one or more agents from a series of observations on the agents' actions and the environmental conditions. Since the 1980s, this research field has captured the attention of several computer science communities due to its strength in providing personalized support for many different applications and its connection to many different fields of study such as medicine, human-computer interaction, or sociology. Due to its multifaceted nature, different fields may refer to activity recognition as plan recognition, goal recognition, intent recognition, behavior recognition, location estimation and location-based services.


Types


Sensor-based, single-user activity recognition

Sensor-based activity recognition integrates the emerging area of sensor networks with novel data mining and machine learning techniques to model a wide range of human activities. Mobile devices (e.g. smartphones) provide sufficient sensor data and computational power to enable physical activity recognition, which in turn provides an estimate of energy expenditure during everyday life. Sensor-based activity recognition researchers believe that by empowering ubiquitous computers and sensors to monitor the behavior of agents (with their consent), these computers will be better suited to act on our behalf. Visual sensors that incorporate color and depth information, such as the Kinect, allow more accurate automatic action recognition and enable many emerging applications such as interactive education and smart environments. Multiple views from visual sensors enable the development of machine learning methods for automatic, view-invariant action recognition. More advanced sensors, such as those used in 3D motion capture systems, allow highly accurate automatic recognition at the expense of a more complicated hardware setup.


Levels of sensor-based activity recognition

Sensor-based activity recognition is a challenging task due to the inherently noisy nature of the input. Thus, statistical modeling has been the main thrust in this direction, applied in layers where recognition at several intermediate levels is conducted and connected. At the lowest level, where the sensor data are collected, statistical learning concerns how to find the detailed locations of agents from the received signal data. At an intermediate level, statistical inference may be concerned with how to recognize individuals' activities from the inferred location sequences and environmental conditions at the lower levels. Finally, at the highest level, a major concern is how to find the overall goal or subgoals of an agent from the activity sequences through a mixture of logical and statistical reasoning.


Sensor-based, multi-user activity recognition

Recognizing activities for multiple users using on-body sensors first appeared in work by ORL using active badge systems in the early 1990s. Other sensor technologies, such as acceleration sensors, were used for identifying group activity patterns in office scenarios. Activities of multiple users in intelligent environments are addressed by Gu ''et al.'', who investigate the fundamental problem of recognizing activities for multiple users from sensor readings in a home environment, and propose a novel pattern mining approach to recognize both single-user and multi-user activities in a unified solution.


Sensor-based group activity recognition

Recognition of group activities is fundamentally different from single- or multi-user activity recognition in that the goal is to recognize the behavior of the group as an entity, rather than the activities of the individual members within it. Group behavior is emergent in nature, meaning that the properties of the behavior of the group are fundamentally different from the properties of the behavior of the individuals within it, or any sum of that behavior. The main challenges lie in modeling the behavior of the individual group members, as well as the roles of the individuals within the group dynamic and their relationship to the emergent behavior of the group, in parallel. Challenges which must still be addressed include quantification of the behavior and roles of individuals who join the group, integration of explicit models for role description into inference algorithms, and scalability evaluations for very large groups and crowds. Group activity recognition has applications for crowd management and response in emergency situations, as well as for social networking and Quantified Self applications.


Approaches


Activity recognition through logic and reasoning

Logic-based approaches keep track of all logically consistent explanations of the observed actions. Thus, all possible and consistent plans or goals must be considered. Kautz provided a formal theory of plan recognition. He described plan recognition as a logical inference process of circumscription. All actions and plans are uniformly referred to as goals, and a recognizer's knowledge is represented by a set of first-order statements called an event hierarchy. The event hierarchy is encoded in first-order logic, which defines abstraction, decomposition and functional relationships between types of events. Kautz's general framework for plan recognition has exponential worst-case time complexity, measured in the size of the input hierarchy. Lesh and Etzioni went one step further and presented methods for scaling goal recognition up computationally. In contrast to Kautz's approach, where the plan library is explicitly represented, Lesh and Etzioni's approach enables automatic plan-library construction from domain primitives. Furthermore, they introduced compact representations and efficient algorithms for goal recognition on large plan libraries: inconsistent plans and goals are repeatedly pruned as new actions arrive. They also presented methods for adapting a goal recognizer to handle individual idiosyncratic behavior, given a sample of an individual's recent behavior. Pollack ''et al.'' described a direct argumentation model that can represent the relative strength of several kinds of arguments for belief and intention ascription. A serious problem of logic-based approaches is their inability, or inherent infeasibility, to represent uncertainty: they offer no mechanism for preferring one consistent explanation to another and are incapable of deciding whether one particular plan is more likely than another, as long as both are consistent with the observed actions. There is also a lack of learning ability associated with logic-based methods. Another approach to logic-based activity recognition is to use stream reasoning based on answer set programming, which has been applied to recognizing activities for health-related applications and uses weak constraints to model a degree of ambiguity/uncertainty.
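The pruning idea behind logic-based recognizers can be illustrated with a small sketch in the spirit of Lesh and Etzioni's approach: candidate goals whose plans are inconsistent with the observed actions are discarded as new observations arrive. The plan library and action names below are hypothetical illustrations, not from any published system.

```python
# Consistency-based goal pruning: a goal survives only if at least one
# of its plans contains the observed actions as an ordered subsequence.

def is_subsequence(observed, plan):
    """True if the observed actions appear in order within the plan."""
    it = iter(plan)
    return all(action in it for action in observed)

def prune(plan_library, observed):
    """Keep only goals with at least one plan consistent with the
    observations; inconsistent candidates are discarded."""
    return {
        goal: [p for p in plans if is_subsequence(observed, p)]
        for goal, plans in plan_library.items()
        if any(is_subsequence(observed, p) for p in plans)
    }

library = {
    "make_coffee": [["boil_water", "grind_beans", "brew"]],
    "make_tea":    [["boil_water", "steep_teabag"]],
}

print(prune(library, ["boil_water", "grind_beans"]))
# only make_coffee survives
```

Note that, exactly as the text observes, such a recognizer can only rule goals out; it has no way to say that one of several surviving goals is more probable than another.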


Activity recognition through probabilistic reasoning

Probability theory and statistical learning models have more recently been applied in activity recognition to reason about actions, plans and goals under uncertainty. In the literature, there have been several approaches which explicitly represent uncertainty in reasoning about an agent's plans and goals. Using sensor data as input, Hodges and Pollack designed machine learning-based systems for identifying individuals as they perform routine daily activities such as making coffee. The Intel Research (Seattle) Lab and the University of Washington have done important work on using sensors to detect human plans. Some of this work infers user transportation modes from readings of radio-frequency identification (RFID) tags and Global Positioning System (GPS) receivers. The use of temporal probabilistic models has been shown to perform well in activity recognition and generally to outperform non-temporal models. Generative models such as the hidden Markov model (HMM) and the more generally formulated dynamic Bayesian networks (DBN) are popular choices in modelling activities from sensor data (van Kasteren, Englebienne and Kröse, "Hierarchical Activity Recognition Using Automatically Clustered Actions", Ambient Intelligence, Springer Berlin/Heidelberg, 2011, 82–91). Discriminative models such as conditional random fields (CRF) are also commonly applied and likewise give good performance in activity recognition. Generative and discriminative models both have their pros and cons, and the ideal choice depends on the area of application. A dataset together with implementations of a number of popular models (HMM, CRF) for activity recognition can be found here.
Conventional temporal probabilistic models such as the HMM and CRF directly model the correlations between the activities and the observed sensor data. In recent years, increasing evidence has supported the use of hierarchical models which take into account the rich hierarchical structure that exists in human behavioral data (Oliver, Garg and Horvitz, "Layered representations for learning and inferring office activity from multiple sensory channels", Computer Vision and Image Understanding, 96(2):163–180, 2004). The core idea here is that the model does not directly correlate the activities with the sensor data, but instead breaks the activity into sub-activities (sometimes referred to as actions) and models the underlying correlations accordingly. An example could be the activity of preparing a stir fry, which can be broken down into the sub-activities or actions of cutting vegetables, frying the vegetables in a pan and serving them on a plate. Examples of such hierarchical models are layered hidden Markov models (LHMMs) and the hierarchical hidden Markov model (HHMM), which have been shown to significantly outperform their non-hierarchical counterparts in activity recognition.


Data mining based approach to activity recognition

In contrast to traditional machine learning approaches, an approach based on data mining has recently been proposed. In the work of Gu ''et al.'', the problem of activity recognition is formulated as a pattern-based classification problem. They proposed a data mining approach based on discriminative patterns, which describe significant changes between any two activity classes of data, to recognize sequential, interleaved and concurrent activities in a unified solution. Gilbert ''et al.'' use 2D corners in both space and time. These are grouped spatially and temporally using a hierarchical process with an increasing search area. At each stage of the hierarchy, the most distinctive and descriptive features are learned efficiently through data mining (the Apriori algorithm).
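A toy sketch can make the discriminative-pattern idea concrete: mine sensor-event patterns that are frequent in one activity class but not in another, loosely in the spirit of Gu ''et al.'' The sensor events and the support threshold below are illustrative assumptions.

```python
# Mine 2-item sensor patterns per class, then keep the patterns that
# discriminate one activity class from the other.

from itertools import combinations
from collections import Counter

def frequent_itemsets(sequences, min_support):
    """Count 2-item sensor patterns and keep those meeting min_support."""
    counts = Counter()
    for seq in sequences:
        for pair in combinations(sorted(set(seq)), 2):
            counts[pair] += 1
    return {p for p, c in counts.items() if c / len(sequences) >= min_support}

def discriminative(class_a, class_b, min_support=0.6):
    """Patterns frequent in class A but not in class B, and vice versa."""
    fa = frequent_itemsets(class_a, min_support)
    fb = frequent_itemsets(class_b, min_support)
    return fa - fb, fb - fa

cooking = [["stove", "fridge"], ["stove", "fridge", "tap"]]
cleaning = [["tap", "cupboard"], ["tap", "vacuum"]]
only_cooking, only_cleaning = discriminative(cooking, cleaning)
print(only_cooking)
# → {('fridge', 'stove')}
```

A real system would mine variable-length patterns (e.g. with Apriori-style candidate generation) rather than only pairs, but the selection principle is the same.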


GPS-based activity recognition

Location-based activity recognition can also rely on GPS data to recognize activities.


Sensor usage


Vision-based activity recognition

Tracking and understanding the behavior of agents through videos taken by various cameras is an important and challenging problem. The primary technique employed is computer vision. Vision-based activity recognition has found many applications such as human-computer interaction, user interface design, robot learning, and surveillance, among others. Scientific conferences where vision-based activity recognition work often appears are ICCV and CVPR. A great deal of work has been done in vision-based activity recognition. Researchers have attempted a number of methods such as optical flow, Kalman filtering, hidden Markov models, etc., under different modalities such as single camera, stereo, and infrared. In addition, researchers have considered multiple aspects of this topic, including single pedestrian tracking, group tracking, and detecting dropped objects. Recently some researchers have used RGBD cameras like the Microsoft Kinect to detect human activities. Depth cameras add an extra dimension, namely depth, which a normal 2D camera cannot provide. Sensory information from these depth cameras has been used to generate a real-time skeleton model of humans in different body positions. This skeleton information is meaningful: researchers have used it to model human activities, which are trained and later used to recognize unknown activities. With the recent emergence of deep learning, RGB-video-based activity recognition has seen rapid development. It uses videos captured by RGB cameras as input and performs several tasks, including video classification, detection of activity start and end in videos, and spatio-temporal localization of the activity and of the people performing it. Despite the remarkable progress of vision-based activity recognition, its usage in most actual visual surveillance applications remains a distant aspiration. Conversely, the human brain seems to have perfected the ability to recognize human actions. This capability relies not only on acquired knowledge, but also on the aptitude for extracting information relevant to a given context and on logical reasoning. Based on this observation, it has been proposed to enhance vision-based activity recognition systems by integrating commonsense reasoning and contextual and commonsense knowledge.


Levels of vision-based activity recognition

In vision-based activity recognition, the computational process is often divided into four steps, namely human detection, human tracking, human activity recognition and then a high-level activity evaluation.
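The four-step decomposition above can be sketched as a simple pipeline. Every stage function here is a hypothetical placeholder standing in for a real detector, tracker, classifier and evaluator; none of it reflects an actual vision library.

```python
# Schematic four-stage vision pipeline: detect -> track -> recognize
# -> evaluate. Each stage is a stub illustrating the data flow only.

def detect_humans(frame):
    # Stand-in for a person detector; returns bounding boxes.
    return [(10, 20, 50, 120)]

def track(boxes, tracks):
    # Stand-in for a tracker; appends each frame's detections to a trajectory.
    return tracks + [boxes]

def recognize_activity(tracks):
    # Stand-in for a classifier over trajectories.
    return "walking" if len(tracks) > 1 else "standing"

def evaluate(activity):
    # High-level evaluation, e.g. flagging activities of interest.
    return {"activity": activity, "anomalous": activity == "running"}

tracks = []
for frame in ["frame0", "frame1"]:
    tracks = track(detect_humans(frame), tracks)
print(evaluate(recognize_activity(tracks)))
```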


Fine-grained action localization

In computer vision-based activity recognition, fine-grained action localization typically provides per-image segmentation masks delineating the human object and its action category (e.g., ''Segment-Tube''). Techniques such as dynamic Markov networks, CNNs and LSTMs are often employed to exploit the semantic correlations between consecutive video frames.


Automatic gait recognition

One way to identify specific people is by how they walk. Gait-recognition software can be used to record a person's gait or gait feature profile in a database for the purpose of recognizing that person later, even if they are wearing a disguise.


Wi-Fi-based activity recognition

When activity recognition is performed indoors and in cities using widely available Wi-Fi signals and 802.11 access points, there is much noise and uncertainty. These uncertainties can be modeled using a dynamic Bayesian network model. In a multiple-goal model that can reason about users' interleaving goals, a deterministic state transition model is applied. Another possible method models the concurrent and interleaving activities in a probabilistic approach. A user action discovery model could segment Wi-Fi signals to produce possible actions.
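One simple way to segment a Wi-Fi signal stream into candidate actions is to threshold a sliding-window variance: flat stretches are idle, high-variance stretches are motion. The signal values, window size, and threshold below are illustrative assumptions, not a published segmentation method.

```python
# Segment a 1-D amplitude stream into (start, end) index ranges where
# the windowed variance exceeds a threshold.

def segments(signal, window=3, threshold=0.5):
    """Return (start, end) index pairs covering high-variance stretches."""
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    active = [var(signal[i:i + window]) > threshold
              for i in range(len(signal) - window + 1)]
    out, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i                      # segment opens at first active window
        elif not a and start is not None:
            out.append((start, i + window - 2))  # last index covered while active
            start = None
    if start is not None:
        out.append((start, len(signal) - 1))
    return out

# Flat signal, then a burst of motion, then flat again.
print(segments([1, 1, 1, 1, 5, 9, 5, 1, 1, 1]))
```

Each returned range is a candidate action that a downstream model (e.g. the dynamic Bayesian network mentioned above) would then classify.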


Basic models of Wi-Fi recognition

One of the primary ideas behind Wi-Fi activity recognition is that the signal passes through the human body during transmission, which causes reflection, diffraction, and scattering. Researchers can extract information from these signals to analyze the activity of the human body.


Static transmission model

When wireless signals are transmitted indoors, obstacles such as walls, the ground, and the human body cause reflection, scattering, and diffraction. The receiving end therefore receives multiple signals from different paths at the same time, because surfaces reflect the signal during transmission; this is known as the multipath effect. The static model is based on two kinds of signals: the direct signal and the reflected signal. Because there is no obstacle in the direct path, direct signal transmission can be modeled by the Friis transmission equation:

: P_r = \frac{P_t G_t G_r \lambda^2}{(4\pi)^2 d^2}

where
: P_t is the power fed into the transmitting antenna input terminals;
: P_r is the power available at the receiving antenna output terminals;
: d is the distance between antennas;
: G_t is the transmitting antenna gain;
: G_r is the receiving antenna gain;
: \lambda is the wavelength of the radio frequency.

If we consider the reflected signal, the new equation is:

: P_r = \frac{P_t G_t G_r \lambda^2}{(4\pi)^2 (d + 4h)^2}

where h is the distance between the reflection point and the direct path. When a human shows up, a new transmission path appears. Therefore, the final equation is:

: P_r = \frac{P_t G_t G_r \lambda^2}{(4\pi)^2 (d + 4h + \Delta)^2}

where \Delta is the approximate difference in path length caused by the human body.
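The free-space Friis relation P_r = P_t G_t G_r λ² / ((4π)² d²) is easy to evaluate numerically. The 2.4 GHz carrier, unity antenna gains, 100 mW transmit power, and 10 m range below are illustrative assumptions for a typical indoor Wi-Fi link.

```python
# Evaluate the Friis free-space transmission equation for a Wi-Fi link.

import math

def friis_received_power(p_t, g_t, g_r, wavelength, d):
    """Received power P_r = P_t * G_t * G_r * lambda^2 / ((4*pi)^2 * d^2)."""
    return p_t * g_t * g_r * wavelength ** 2 / ((4 * math.pi) ** 2 * d ** 2)

c = 3e8                      # speed of light, m/s
wavelength = c / 2.4e9       # ~0.125 m at Wi-Fi's 2.4 GHz band
p_r = friis_received_power(p_t=0.1, g_t=1.0, g_r=1.0,
                           wavelength=wavelength, d=10.0)
print(f"{10 * math.log10(p_r / 1e-3):.1f} dBm")  # received power in dBm
```

The 1/d² fall-off is what makes the small path perturbations \Delta caused by a human body measurable at the receiver.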


Dynamic transmission model

In this model, we consider human motion, which causes the signal transmission path to change continuously. We can use the Doppler shift to describe this effect, which is related to the speed of motion:

: \Delta f = \frac{2v}{c} f

where v is the speed at which the reflecting body moves along the signal path (the factor 2 accounts for the round trip of the reflected signal), c is the speed of light, and f is the carrier frequency. By calculating the Doppler shift of the received signal, we can figure out the pattern of the movement and thereby further identify the human activity. For example, the Doppler shift has been used as a fingerprint to achieve high-precision identification of nine different movement patterns.
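The Doppler shifts involved are tiny relative to the carrier, which is why they serve as a fine-grained motion fingerprint. A quick numeric illustration, where the 5 GHz carrier and 0.5 m/s walking speed are illustrative assumptions:

```python
# Doppler shift of a signal reflected off a moving body,
# delta_f = 2 * v * f / c (round-trip factor of 2 for the reflection).

def doppler_shift(speed, carrier_hz, c=3e8):
    """Frequency shift (Hz) for a reflector moving at `speed` m/s."""
    return 2 * speed * carrier_hz / c

shift = doppler_shift(speed=0.5, carrier_hz=5e9)
print(f"{shift:.1f} Hz")  # a sub-100 Hz shift against a 5 GHz carrier
```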


Fresnel zone

The Fresnel zone was initially used to study the interference and diffraction of light, and was later used to construct wireless signal transmission models. The Fresnel zones are a series of elliptical regions whose foci are the positions of the sender and receiver. When a person moves across different Fresnel zones, the signal path formed by the reflection off the human body changes, and if people move vertically through the Fresnel zones, the change of the signal is periodic. The Fresnel model has been applied to the activity recognition task with more accurate results (D. Wu, D. Zhang, C. Xu, Y. Wang and H. Wang, "WiDir: Walking direction estimation using wireless signals", Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, New York, USA, 2016, 351–362; H. Wang, D. Zhang, J. Ma, Y. Wang, Y. Wang, D. Wu, T. Gu and B. Xie, "Human respiration detection with commodity WiFi devices: Do user location and body orientation matter?", ibid., 25–36).
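The boundary of the n-th Fresnel zone is the locus where the reflected path exceeds the direct path by nλ/2; at a point d1 from the sender and d2 from the receiver its radius is r_n = sqrt(n λ d1 d2 / (d1 + d2)). The 5 GHz frequency and 4 m link geometry below are illustrative assumptions.

```python
# Radii of the first few Fresnel zones at the midpoint of a Wi-Fi link.

import math

def fresnel_radius(n, d1, d2, wavelength):
    """Radius of the n-th Fresnel zone boundary at distances d1 and d2
    from the sender and receiver (path difference n * lambda / 2)."""
    return math.sqrt(n * wavelength * d1 * d2 / (d1 + d2))

wavelength = 3e8 / 5e9  # ~0.06 m at 5 GHz
for n in (1, 2, 3):
    r = fresnel_radius(n, d1=2.0, d2=2.0, wavelength=wavelength)
    print(f"zone {n}: {r * 100:.1f} cm")
```

At Wi-Fi frequencies the zones are tens of centimeters apart, so even small body movements cross zone boundaries and produce the periodic signal changes described above.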


Modeling of the human body

In some tasks, we should consider modeling the human body accurately to achieve better results. For example, the human body has been described as concentric cylinders for breath detection: the outer cylinder denotes the rib cage when people inhale, the inner one denotes it when people exhale, and the difference between the radii of the two cylinders represents the distance moved during breathing. The change in signal phase can be expressed by the following equation:

: \theta = 2\pi \frac{\Delta d}{\lambda}

where
: \theta is the change of the signal phase;
: \lambda is the wavelength of the radio frequency;
: \Delta d is the moving distance of the rib cage.
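Plugging numbers into the phase relation shows why millimeter-scale chest motion is visible to a Wi-Fi receiver. The 5 mm chest displacement and 5 GHz carrier below are illustrative assumptions.

```python
# Phase change caused by a rib-cage displacement, theta = 2*pi*delta_d/lambda.

import math

def phase_change(delta_d, wavelength):
    """Signal phase change (radians) for a path-length change delta_d."""
    return 2 * math.pi * delta_d / wavelength

wavelength = 3e8 / 5e9               # ~0.06 m at 5 GHz
theta = phase_change(0.005, wavelength)
print(f"{math.degrees(theta):.0f} degrees")  # → 30 degrees for 5 mm of motion
```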


Datasets

There are some popular datasets that are used for benchmarking activity recognition or action recognition algorithms.

* UCF-101: consists of 101 human action classes, over 13k clips and 27 hours of video data. Action classes include applying makeup, playing dhol, cricket shot, shaving beard, etc.
* HMDB51: a collection of realistic videos from various sources, including movies and web videos. The dataset is composed of 6,849 video clips from 51 action categories (such as "jump", "kiss" and "laugh"), with each category containing at least 101 clips.
* Kinetics: a significantly larger dataset than the previous ones, containing 400 human action classes with at least 400 video clips for each action. Each clip lasts around 10 seconds and is taken from a different YouTube video. The dataset was created by DeepMind.


Applications

By automatically monitoring human activities, home-based rehabilitation can be provided for people suffering from traumatic brain injuries. One can find applications ranging from security-related applications and logistics support to location-based services. Activity recognition systems have also been developed for wildlife observation and for energy conservation in buildings (Nguyen, Tuan Anh, and Marco Aiello, "Energy intelligent buildings based on user activity: A survey", Energy and Buildings 56 (2013): 244–257).


See also

* AI effect
* Applications of artificial intelligence
* Conditional random field
* Gesture recognition
* Hidden Markov model
* Motion analysis
* Naive Bayes classifier
* Support vector machines
* Object co-segmentation
* Outline of artificial intelligence
* Video content analysis

