1. Introduction
2. Materials and Methods
2.1. Dataset
2.2. Human Activity Representation
2.2.1. Motion Profile
 To analyze the depth-map video sequence, we use a sliding window of size W frames with an overlap of size O frames between any two consecutive positions of the sliding window. For a given position of the sliding window, the frames in the window are numbered sequentially from 1 to W. For each window position, we build a motion profile for the human activities captured within the frames of the window as follows:
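The windowing scheme above can be sketched as a small generator (a minimal illustration; the function name `sliding_windows` and the list-based frame representation are our assumptions, while `W` and `O` follow the text):

```python
def sliding_windows(frames, W, O):
    """Yield successive windows of W frames with an overlap of O frames.

    The window start advances by the stride W - O, so any two
    consecutive windows share exactly O frames.
    """
    stride = W - O
    for start in range(0, len(frames) - W + 1, stride):
        # Within a window, frames are indexed 1..W relative to the window.
        yield frames[start:start + W]

# Example: 10 frames, W = 4, O = 2 -> windows start at frames 0, 2, 4, 6.
windows = list(sliding_windows(list(range(10)), W=4, O=2))
```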
 1. We create a body-attached coordinate system that is centered at the hip center ($hc$) joint. Then, we recalculate the positions of all of the other skeletal joints with respect to the $hc$ joint. Figure 3 illustrates the constructed body-attached coordinate system.
 2. Using the recalculated joint positions, we define the three anatomical planes, i.e., the TP, CP and SP, such that the three planes intersect at the $hc$ joint (see Figure 3). Each plane is defined using three non-collinear skeletal joint positions as described below:$$TP \equiv \langle hc, \tilde{lhp}, \tilde{rhp} \rangle,$$$$CP \equiv \langle hc, lsh, rsh \rangle,$$$$SP \equiv \langle hc, sc, sp \rangle.$$
 3. We compute the displacement vectors for a subset of the skeletal joints, denoted as ${s}_{dv}$, that are related to the fall event. In this study, the subset ${s}_{dv}$ is composed of the following joints: sp, hd, rhd, lhd, rft and lft. The displacement vector of each skeletal joint $X \in {s}_{dv}$ is computed with respect to the first frame in the sliding window, and its signed distance to an anatomical plane $\mathbf{Y}$ is obtained by projecting it onto the plane's unit normal:$$\mathbf{DV}_{\mathbf{X}(k)} = \mathbf{X}(k) - \mathbf{X}(1),$$$$SgnDist(\mathbf{DV}_{\mathbf{X}(k)}, \mathbf{Y}) = \left( \frac{(\mathbf{Y}(2)-\mathbf{Y}(1)) \times (\mathbf{Y}(3)-\mathbf{Y}(1))}{\parallel (\mathbf{Y}(2)-\mathbf{Y}(1)) \times (\mathbf{Y}(3)-\mathbf{Y}(1)) \parallel} \right) \cdot \mathbf{DV}_{\mathbf{X}(k)},$$where $k \in \{1, \ldots, W\}$ is the frame index within the window and $\mathbf{Y}(1)$, $\mathbf{Y}(2)$ and $\mathbf{Y}(3)$ are the three skeletal joint positions that define the plane $\mathbf{Y}$.
 4. The motion profile of each video frame in the current window position is defined as a vector that consists of the computed displacement vectors along with their associated signed distances for the skeletal joints in the set ${s}_{dv}$.
 Finally, we move the sliding window to its next position and repeat the procedure from the first step.
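The per-window computation above can be sketched with numpy (a minimal sketch; the dictionary-based joint representation and all function names are our assumptions, and the joint positions are taken to be already re-expressed in the hip-centered coordinate system):

```python
import numpy as np

def plane_unit_normal(p1, p2, p3):
    """Unit normal of the plane through three non-collinear points."""
    n = np.cross(p2 - p1, p3 - p1)
    return n / np.linalg.norm(n)

def motion_profile(window, s_dv, plane_points):
    """Build the motion profile for one window position.

    window       : list of dicts mapping joint name -> 3D position,
                   already re-expressed in the hip-centered frame.
    s_dv         : joint names whose displacement vectors are tracked.
    plane_points : three joint names defining the anatomical plane Y.
    """
    profile = []
    first = window[0]
    for frame in window:
        p1, p2, p3 = (frame[j] for j in plane_points)
        n = plane_unit_normal(p1, p2, p3)
        for joint in s_dv:
            dv = frame[joint] - first[joint]   # DV_X(k) = X(k) - X(1)
            sgn_dist = float(np.dot(n, dv))    # signed distance to plane Y
            profile.extend([*dv, sgn_dist])
    return np.asarray(profile)
```

Concatenating such per-frame vectors over all joints in ${s}_{dv}$ and all three anatomical planes yields a fixed-length motion profile per window position.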
2.2.2. Pose Profile
2.3. Fall Detection from Partially-Observed Depth-Map Video Sequences
3. Experimental Results and Discussion
3.1. Evaluation on the Fully-Observed Video Sequences Scenario
3.2. Evaluation on the Single Unobserved Video Subsequence with Random Length Scenarios
3.2.1. Evaluation Results When the Temporal Gap Is at the Beginning of the Video Sequences
3.2.2. Evaluation Results When the Temporal Gap Is at the End of the Video Sequences
3.2.3. Evaluation Results When the Temporal Gap Is at the Middle of the Video Sequences
3.3. Evaluation of the Two Unobserved Video Subsequences with Random Lengths Scenarios
3.3.1. Evaluation Results When the Two Temporal Gaps Are at the Beginning and the End of the Video Sequences
3.3.2. Evaluation Results When the Two Temporal Gaps Are between the Beginning and the End of the Video Sequences
4. Conclusions
Author Contributions
Conflicts of Interest
References
| Activity | Sub-Activities |
| --- | --- |
| Walking | Walking forward (i.e., walking towards the Kinect sensor) and walking backward (i.e., walking away from the Kinect sensor). |
| Sitting | Stand-still pose, bending down for sitting and sitting pose. |
| Falling from standing | Stand-still pose, falling from standing pose and fall pose. |
| Falling from sitting | Sitting pose, falling from sitting pose and fall pose. |
| Angle | Description | Mathematical Formulation |
| --- | --- | --- |
| ${\theta}_{Lshank}$ | The angle between the left shank and a translated transverse plane (${TP}_{1}$) that passes through the left ankle joint position, where the left shank is defined as the line in space that passes through the left ankle and left knee joint positions. | ${\theta}_{Lshank}=\mathrm{arcsin}\frac{{\overrightarrow{n}}_{{TP}_{1}} \cdot {\overrightarrow{u}}_{Lshank}}{\parallel {\overrightarrow{n}}_{{TP}_{1}}\parallel \parallel {\overrightarrow{u}}_{Lshank}\parallel}$ |
| ${\theta}_{Lthigh}$ | The angle between the left thigh and a translated transverse plane (${TP}_{2}$) that passes through the left knee joint position, where the left thigh is defined as the line in space that passes through the left hip and left knee joint positions. | ${\theta}_{Lthigh}=\mathrm{arcsin}\frac{{\overrightarrow{n}}_{{TP}_{2}} \cdot {\overrightarrow{u}}_{Lthigh}}{\parallel {\overrightarrow{n}}_{{TP}_{2}}\parallel \parallel {\overrightarrow{u}}_{Lthigh}\parallel}$ |
| ${\theta}_{Lknee}$ | The angle between the left thigh and the left shank. | ${\theta}_{Lknee}={\theta}_{Lthigh}-{\theta}_{Lshank}$ |
| ${\theta}_{Rshank}$ | The angle between the right shank and a translated transverse plane (${TP}_{3}$) that passes through the right ankle joint position, where the right shank is defined as the line in space that passes through the right ankle and right knee joint positions. | ${\theta}_{Rshank}=\mathrm{arcsin}\frac{{\overrightarrow{n}}_{{TP}_{3}} \cdot {\overrightarrow{u}}_{Rshank}}{\parallel {\overrightarrow{n}}_{{TP}_{3}}\parallel \parallel {\overrightarrow{u}}_{Rshank}\parallel}$ |
| ${\theta}_{Rthigh}$ | The angle between the right thigh and a translated transverse plane (${TP}_{4}$) that passes through the right knee joint position, where the right thigh is defined as the line in space that passes through the right hip and right knee joint positions. | ${\theta}_{Rthigh}=\mathrm{arcsin}\frac{{\overrightarrow{n}}_{{TP}_{4}} \cdot {\overrightarrow{u}}_{Rthigh}}{\parallel {\overrightarrow{n}}_{{TP}_{4}}\parallel \parallel {\overrightarrow{u}}_{Rthigh}\parallel}$ |
| ${\theta}_{Rknee}$ | The angle between the right thigh and the right shank. | ${\theta}_{Rknee}={\theta}_{Rthigh}-{\theta}_{Rshank}$ |
| ${\theta}_{trunk}$ | The angle between the trunk and the transverse plane ($TP$), where the trunk is defined as the line in space that passes through the hip center and shoulder center joint positions. | ${\theta}_{trunk}=\mathrm{arcsin}\frac{{\overrightarrow{n}}_{TP} \cdot {\overrightarrow{u}}_{trunk}}{\parallel {\overrightarrow{n}}_{TP}\parallel \parallel {\overrightarrow{u}}_{trunk}\parallel}$ |
| ${\theta}_{Lhip}$ | The angle between the trunk and the left thigh. | ${\theta}_{Lhip}={\theta}_{Lthigh}-{\theta}_{trunk}$ |
| ${\theta}_{Rhip}$ | The angle between the trunk and the right thigh. | ${\theta}_{Rhip}={\theta}_{Rthigh}-{\theta}_{trunk}$ |
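Each arcsin formulation in the table is the same operation applied to a different plane normal and segment direction; a minimal numpy sketch (the helper name and its inputs are our assumptions):

```python
import numpy as np

def segment_plane_angle(n_plane, u_segment):
    """Angle (in radians) between a body segment and a plane, computed as
    arcsin of the normalized dot product between the plane normal and the
    segment direction vector."""
    n = np.asarray(n_plane, dtype=float)
    u = np.asarray(u_segment, dtype=float)
    s = np.dot(n, u) / (np.linalg.norm(n) * np.linalg.norm(u))
    # Clip to guard against round-off slightly outside [-1, 1].
    return np.arcsin(np.clip(s, -1.0, 1.0))

# Joint angles are then simple differences of segment-plane angles, e.g.
# theta_Lknee = theta_Lthigh - theta_Lshank.
```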
| Activity | Precision | Recall | F1-Measure |
| --- | --- | --- | --- |
| Walking | 95.0% | 94.4% | 94.7% |
| Sitting | 92.1% | 90.1% | 91.1% |
| Falling from sitting | 97.0% | 96.5% | 96.7% |
| Falling from standing | 96.1% | 95.3% | 95.7% |
| Overall average | 95.1% | 92.8% | 94.6% |
| Activity | Precision | Recall | F1-Measure | Average Gap Length (Frames) |
| --- | --- | --- | --- | --- |
| Walking | 90.8% | 81.5% | 85.9% | 32 |
| Sitting | 84.0% | 85.0% | 84.5% | 37 |
| Falling from sitting | 76.2% | 79.0% | 77.6% | 34 |
| Falling from standing | 77.0% | 78.4% | 77.7% | 38 |
| Overall average | 82.0% | 81.0% | 81.4% | 35 |
| Activity | Precision | Recall | F1-Measure | Average Gap Length (Frames) |
| --- | --- | --- | --- | --- |
| Walking | 79.7% | 81.2% | 80.4% | 29 |
| Sitting | 69.3% | 80.0% | 74.3% | 35 |
| Falling from sitting | 71.5% | 74.0% | 72.7% | 32 |
| Falling from standing | 61.8% | 68.0% | 64.7% | 36 |
| Overall average | 70.6% | 75.8% | 73.0% | 33 |
| Activity | Precision | Recall | F1-Measure | Average Gap Length (Frames) |
| --- | --- | --- | --- | --- |
| Walking | 86.0% | 72.7% | 78.8% | 21 |
| Sitting | 76.2% | 85.0% | 80.4% | 27 |
| Falling from sitting | 82.1% | 84.0% | 83.0% | 24 |
| Falling from standing | 73.2% | 82.0% | 77.3% | 22 |
| Overall average | 79.4% | 80.9% | 79.9% | 24 |
| Activity | Precision | Recall | F1-Measure | Average Length of the First Temporal Gap (Frames) | Average Length of the Second Temporal Gap (Frames) |
| --- | --- | --- | --- | --- | --- |
| Walking | 63.3% | 52.7% | 57.5% | 18 | 16 |
| Sitting | 64.4% | 62.0% | 68.7% | 22 | 27 |
| Falling from sitting | 57.0% | 66.0% | 61.1% | 20 | 18 |
| Falling from standing | 54.2% | 64.0% | 58.7% | 17 | 19 |
| Overall average | 59.7% | 61.2% | 61.5% | 19 | 20 |
| Activity | Precision | Recall | F1-Measure | Average Length of the First Temporal Gap (Frames) | Average Length of the Second Temporal Gap (Frames) |
| --- | --- | --- | --- | --- | --- |
| Walking | 78.3% | 73.6% | 75.9% | 10 | 9 |
| Sitting | 73.1% | 72.0% | 72.5% | 11 | 12 |
| Falling from sitting | 71.6% | 76.3% | 73.9% | 13 | 10 |
| Falling from standing | 72.2% | 78.1% | 75.0% | 12 | 11 |
| Overall average | 73.8% | 75.0% | 74.3% | 11 | 10 |
© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).