41:6 • C. Wang et al.

Fig. 3. Phase vs. tag orientation: the phase value is related to the tag orientation. (a) Experimental deployment for examining the influence of tag orientation; (b) the phase value increases with the change of tag orientation.

like wearable sensors. Compared with vision-based approaches, RF-Kinect can efficiently filter out other users based on the tag IDs, and it usually works well even when objects block the line-of-sight path. As shown in Figure 2, RF-Kinect utilizes a dual-antenna RFID reader to continuously scan the wearable RFID tags attached to the user (e.g., on the clothes) for body movement tracking. In practical applications, we embed the tags into T-shirts at fixed positions to avoid complicated configuration. Changes in the user's posture (e.g., an arm rotation) lead to corresponding displacement and rotation of the wearable tags, thus producing unique RF signal patterns. Specifically, during each scan of the RFID reader, the RF signals collected from the multiple wearable RFID tags are combined to estimate the orientation of each limb (e.g., the upper arm, lower arm) and the position of each joint (e.g., the elbow, wrist), from which the body posture is reconstructed. By concatenating the body postures derived from multiple scans, the entire body movement can be uniquely determined. To avoid exhaustive training efforts covering all possible body movements, a training-free framework is essentially needed to reduce the complexity of body movement tracking.

3.2 Preliminaries

In order to track the body movement, we need to identify reliable RF signal features for distinguishing different postures and their changes. Several RF signal features, such as phase, RSSI and reading rate, are available from the RFID system.
According to recent studies [21, 28, 35, 41], the phase information has proved to be a better choice than the other features for localization and many other sensing applications. In particular, the phase indicates the offset of the received signal from the original transmitted signal, ranging from 0 to 360 degrees. Assuming the distance between the antenna and the tag is $d$, the phase $\theta$ can be represented as:

$$\theta = \left(2\pi \frac{2d}{\lambda} + \theta_{dev}\right) \bmod 2\pi, \qquad (1)$$

where $\lambda$ is the wavelength, and $\theta_{dev}$ is the system noise caused by factory imperfection of the tags.

Since the body movement unavoidably changes the tag orientation in 3D space, we first conduct controlled experiments to study the influence of the tag orientation on the phase, as illustrated in Figure 3(a). The RFID tag spins 180° on a fixed spot along three different axes in front of an RFID reader, and the corresponding phase change is presented in Figure 3(b). We find that the phase changes linearly as the tag rotates along the X-axis, and remains stable along the Z-axis. When rotating along the Y-axis, most phase values are similar except at the perpendicular directions (i.e., −90° and 90°). We observe a similar phase variation trend when conducting the same experiment with different tags placed at different locations. The reason behind this phenomenon is that RFID tags are commonly equipped with a linear-polarized dipole antenna, while the reader antenna works

Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Vol. 2, No. 1, Article 41. Publication date: March 2018
RF-Kinect: A Wearable RFID-based Approach Towards 3D Body Movement Tracking • 41:7

Fig. 4. Preliminary study of the PDT trend when the tag-attached user stretches the arm forward. (a) A user stretches the arm forward with two tags attached on the arm; (b) PDT trend when the user stretches the arm forward.

at the circular polarization mode, which is compatible with two perpendicular polarizations with a 90° phase difference, so as to identify tags of different orientations. The rotation of an RFID tag along the X-axis changes the polarization of the dipole antenna, and thereby affects the phase measurements due to the changed electromagnetic coupling between the tag and the reader antenna. As a result, rotating 180° along the X-axis leads to a 2π change in the phase measurement, whereas rotating along the Y- or Z-axis leads to no phase change due to the constant electromagnetic coupling. Note that when the tag rotates 90° along the Y-axis, the tag faces the direction of the minimum gain of the RFID reader antenna, thus producing erroneous phase measurements due to multi-path effects. The above observation implies that the tag orientation has a significant impact on the extracted phase, and results in ambiguity in body movement tracking.

In order to eliminate such negative impacts from tag orientation changes, we utilize the Phase Difference between Tags (PDT) to track the body movement. Specifically, we deploy multiple RFID tags in parallel and then measure the phase difference between these tags. Since all the tags have the same phase offset due to their consistent orientation, we can cancel this phase offset via the phase difference between different tags.
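The interplay of Eq. (1) and the PDT can be sketched in a few lines of Python (a simplified model with hypothetical distances and offsets, ignoring noise and multipath):

```python
import math

TWO_PI = 2 * math.pi

def phase(d, wavelength, offset=0.0):
    """Theoretical backscatter phase per Eq. (1): the signal travels
    the reader-tag distance d twice, plus a device- or
    orientation-dependent offset, wrapped to [0, 2*pi)."""
    return (TWO_PI * 2 * d / wavelength + offset) % TWO_PI

def pdt(phase_a, phase_b):
    """Phase Difference between Tags (PDT): any offset common to both
    tags (e.g., a shared polarization shift from rotation) cancels."""
    return (phase_a - phase_b) % TWO_PI

# Hypothetical example: two parallel tags 1.50 m and 1.58 m from the
# antenna, read at 920 MHz; a common rotation offset of 2.1 rad
# leaves the PDT unchanged.
lam = 3e8 / 920e6
no_rot = pdt(phase(1.50, lam), phase(1.58, lam))
rotated = pdt(phase(1.50, lam, 2.1), phase(1.58, lam, 2.1))
```

Because the rotation term enters both tags' phases identically, it drops out of the difference, which is exactly why the PDT is robust to the orientation effect discussed above.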
Moreover, even if the tags have slightly different orientations due to deployment or manufacturing error, the phase offset due to rotation can still be canceled, because the tags undergo the same rotation and hence acquire the same phase offset. To validate its effectiveness, we further conduct another experiment with the posture where the user stretches his/her arm forward, as shown in Figure 4(a). Two tags with the same orientation are attached on the lower arm at a fixed distance, and the corresponding phase difference (PDT) is presented in Figure 4(b). We find that the PDT first increases and then slightly decreases. This coincides with the trend of the distance difference between the two tags with respect to one antenna (i.e., the difference between the red and yellow lines), which is small at first and increases to its maximum as the arm points toward the antenna (e.g., the blue dashed line). Therefore, it is feasible to track the body movement based on the PDT.

3.3 Challenges

To achieve accurate and training-free body movement tracking with minimum hardware support, we identify three key challenges as follows:

Tracking with a Dual-antenna RFID Reader. Given only a dual-antenna RFID reader, to satisfy the minimum hardware requirement, tracking the human body movement is a challenging task. Existing RFID-based localization methods (e.g., [26, 41]) are not applicable to our problem, since they require at least three antennas
or moving antennas to locate a target tag in a 2D environment. Other related studies such as [27] and [29] can locate a tagged object with two antennas or even one antenna, but they only work in the 2D plane and only track a single object by attaching one or more tags on it, so they are not applicable to tracking the complex body movement in 3D space in our application scenario. Thus, a dual-antenna-based solution needs to be proposed to facilitate 3D body movement tracking.

Imperfect Phase Measurements. Unlike previous RFID-based localization studies [35, 41] that track the tag movement in 2D space, our work aims at a more challenging goal, i.e., tracking the movement in 3D space, which poses even higher requirements on the phase measurements. There are multiple factors that may affect the uniqueness and accuracy of phase measurements related to the body movement. According to our preliminary study, the phase change of the RF signal is determined by both the tag-antenna distance and the tag orientation. Moreover, both the water-rich human body and the muscle deformation during the body movement may affect the phase measurements of RF signals. All these factors together make it much harder to track the human body movement in 3D space leveraging the phase information in RF signals.

Training-free Body Movement Tracking. Existing studies on gesture tracking usually spend significant efforts on training the classification model by asking the users to perform each specific gesture multiple times [13, 25]. However, the number of gestures that can be recognized highly relies on the size of the training set, so the scalability to unknown gestures is greatly limited. Some other methods [29, 31, 35] are designed to recover the trace of a rigid body (e.g., a finger or a box) from signal models, but they are not suitable for the complex human body, which consists of several rigid bodies.
In order to flexibly identify diverse gestures and postures of the complex human body, it is critical to develop a body movement tracking system that does not rely on any training dataset.

4 SYSTEM DESIGN

In this section, we first introduce the architecture of our RF-Kinect system, and then present the building modules of RF-Kinect for tracking the 3D body movement.

4.1 System Architecture

The basic idea of RF-Kinect is to derive the body posture in each scanning round by analyzing the RF signals from the wearable RFID tags attached on the limbs and chest, and then reconstruct the body movement from the series of body postures in consecutive scans. Figure 5 illustrates the architecture of RF-Kinect. We first extract the phase information of M RFID tags from two antennas in consecutive scanning rounds as the Phase Stream, where all the attached tags are read in each scanning round. Then the system is initialized by requiring the user to stand still with his/her arms hanging down naturally. Since the chest can be regarded as a rigid object, the tags on the chest enable the Body Position/Orientation Estimation module to determine the position and facing orientation of the user relative to the antennas, based on a model-based approach from previous work (e.g., [21]). Then, the Coordinate Transformation module converts the relative positions of the antennas into the Skeleton Coordinate System (SCS), which is defined based on the human body geometric structure in Section 4.2, so that the coordinates of both the tags and the antennas can be expressed properly. Based on the coordinates of the antennas and of the tags attached on the user's body when the user stands still, the theoretical phase value of each tag is calculated from Eq. (1). The Phase Deviation Elimination module then computes the phase offset between the theoretical and the measured phase value, which is used to eliminate the phase deviation from the subsequent biased phase stream.
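The Phase Deviation Elimination step can be sketched as follows (the numeric values are hypothetical; real offsets come from the readings taken during the initialization pose):

```python
import math

TWO_PI = 2 * math.pi

def estimate_offset(measured_still, theoretical_still):
    """Per-tag phase deviation, estimated once while the user stands
    still: wrapped difference of measured and theoretical phase."""
    return (measured_still - theoretical_still) % TWO_PI

def eliminate_deviation(raw_phase, offset):
    """Subtract the stored deviation from a subsequent reading."""
    return (raw_phase - offset) % TWO_PI

# Hypothetical tag with a 0.7 rad hardware bias: the offset learned
# at initialization removes the bias from later readings.
offset = estimate_offset(1.7, 1.0)
clean = eliminate_deviation(3.2, offset)
```

The offset is estimated once per tag and then applied to every reading of that tag in the subsequent phase stream.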
After the above preprocessing, the Phase Difference Extraction module extracts two phase-related features from the RF signal measurements in each scanning round: (i) the Phase Difference between any two Tags (PDT) attached to the same part of a limb (e.g., the upper arm), and (ii) the Phase Difference between the two Antennas (PDA) for the same tag. The two phase-related features are then utilized to estimate the limb postures based on the 3D Limb
Orientation Estimation, AoA-based Orientation Refinement and Relative Distance-based Orientation Calibration methods in the Body Posture Estimation. The first two methods determine the limb postures by comparing the extracted PDT/PDA with the theoretical PDT/PDA derived from the Human Body Geometric Model. Moreover, the Relative Distance-based Orientation Filter removes impossible orientations by measuring the relationship between different skeleton segments, which shrinks the search range in the orientation estimation. Specifically, the arm posture estimation proceeds from the upper arm to the lower arm, while the leg posture estimation follows the order from the thigh to the shank. Then, the individual postures estimated from multiple scanning rounds constitute a Recognized Body Posture Stream representing the body movement, which is further smoothed with a Kalman Filter. After smoothing, the position of each tag can be calculated from the estimated body posture. Then we compute the theoretical phase value of each tag and extract the theoretical PDA as a constraint condition to calibrate the next body posture estimation.

Fig. 5. System architecture of training-free RF-Kinect.
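The smoothing step can be roughly illustrated with a minimal 1D Kalman filter over a single joint-angle stream; the actual state model and noise parameters used by RF-Kinect are not specified here, so `q` and `r` below are illustrative assumptions:

```python
def kalman_smooth(angles, q=1e-3, r=1e-1):
    """Smooth a noisy joint-angle stream with a 1D Kalman filter
    under a random-walk state model. q is the assumed process-noise
    variance, r the measurement-noise variance."""
    x, p = angles[0], 1.0          # initial state and variance
    smoothed = [x]
    for z in angles[1:]:
        p += q                     # predict: uncertainty grows
        k = p / (p + r)            # Kalman gain
        x += k * (z - x)           # correct with the new measurement
        p *= 1.0 - k               # updated (reduced) variance
        smoothed.append(x)
    return smoothed

# Each smoothed estimate is a convex combination of past values, so
# an outlier spike (0.93 below) is damped rather than followed.
raw = [0.50, 0.62, 0.48, 0.55, 0.93, 0.57, 0.51]
smooth = kalman_smooth(raw)
```

A random-walk model suffices for this sketch; a richer state (e.g., angle plus angular velocity) would track fast limb motion more faithfully.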
Finally, the 3D body movement is reconstructed accordingly, and can be applied to many interesting applications, such as gaming, healthcare, etc.

4.2 Human Body Geometric Model

Before diving into the details of the body movement tracking, we first introduce the human body geometric model in RF-Kinect. Inspired by robotics studies that model the human arm with 7 rotational degrees of freedom (DoF) [24], we use 4 of the 7 DoFs to model the joints of a single arm, ignoring the other 3 DoFs on the wrist, which are beyond the detection capability of our system. Similarly, we extend the method to model the joints of the leg with 3 DoFs.

Figure 6 illustrates the model of the right side of the human body; the left side follows a similar model. We use red and blue arrows to indicate the DoFs on the arm and leg, respectively. Specifically, ϕ1, ϕ2 and ϕ3 are the 3 DoFs of the shoulder joint, corresponding to flexion/extension, abduction/adduction and internal/external rotation of the shoulder, respectively, and ϕ4 represents flexion/extension of the elbow joint. Here, ϕ1 = ϕ2 = ϕ3 = ϕ4 = 0° refers to the posture where the arm hangs down naturally with the palm facing forward. When the user lifts up one arm with a bent elbow as shown in Figure 6, both ϕ1 and ϕ4 change accordingly. Specifically,
Fig. 6. Human body geometric model with a rotation angle on each joint.

ϕ1 represents the lift angle of the upper arm and ϕ4 represents the bent angle of the elbow. In fact, ϕ1 and ϕ2 together determine the orientation of the upper arm in 3D space, while ϕ3 and ϕ4 determine the lower arm orientation. It is interesting that ϕ3, the third DoF on the shoulder, does not affect the orientation of the upper arm but that of the lower arm instead, because ϕ3 measures the angle of internal/external rotation around the arm. Further, ϕ5, ϕ6 and ϕ7 are the 3 DoFs on the leg. Since the lower body can be modeled in a similar way as the upper body, we focus on the upper body to demonstrate the human body geometric model.

Given the length of the upper arm $l_u$ and the lower arm $l_l$, the positions of the elbow and wrist are determined by the rotation values of ϕ1, ϕ2, ϕ3 and ϕ4 in the Skeleton Coordinate System (SCS). Figure 6 illustrates the SCS in our system: the plane where the user's torso (i.e., the chest) locates serves as the XZ plane, and the line emanating from the shoulder in the frontward direction indicates the Y axis of the SCS. The midpoint between the two feet is the origin of the SCS. Therefore, according to the mechanism model with the Denavit-Hartenberg transformation [12], we can express the posture of the arm by calculating the positions of the elbow and wrist from ϕ1, ϕ2, ϕ3 and ϕ4. For example, since the orientation of the upper arm is determined by ϕ1 and ϕ2, the position of the elbow $p_e$ is a function of ϕ1, ϕ2 and $l_u$:

$$p_e = p_s + f(\phi_1, \phi_2, l_u) = p_s + l_u \begin{pmatrix} \sin\phi_2 \\ \cos\phi_2 \sin\phi_1 \\ -\cos\phi_1 \cos\phi_2 \end{pmatrix}, \qquad (2)$$

where $p_s$ represents the position of the shoulder and the function $f(\cdot)$ calculates the vector pointing from the shoulder to the elbow.
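Eq. (2) maps directly to code; the sketch below assumes angles in radians and a hypothetical shoulder coordinate in the SCS frame:

```python
import math

def elbow_position(p_s, phi1, phi2, l_u):
    """Elbow position per Eq. (2): shoulder position p_s plus the
    upper-arm vector set by the lift angle phi1 and abduction angle
    phi2, scaled by the upper-arm length l_u."""
    return (p_s[0] + l_u * math.sin(phi2),
            p_s[1] + l_u * math.cos(phi2) * math.sin(phi1),
            p_s[2] - l_u * math.cos(phi1) * math.cos(phi2))

# Sanity check with a hypothetical shoulder position: at
# phi1 = phi2 = 0 (arm hanging down), the elbow lies directly
# below the shoulder by the upper-arm length.
p_e = elbow_position((0.2, 0.0, 1.4), 0.0, 0.0, 0.3)
```

Raising the arm forward (ϕ1 = 90°, ϕ2 = 0) moves the elbow to the front of the shoulder at shoulder height, matching the flexion/extension semantics of ϕ1.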
Similarly, the position of the wrist $p_w$ can be represented as:

$$p_w = p_e + g(\phi_1, \phi_2, \phi_3, \phi_4, l_l), \qquad (3)$$

where $g(\cdot)$ computes the vector pointing from the elbow to the wrist.¹

4.3 Body Posture Estimation

As the core module of the RF-Kinect system, three key techniques for estimating the body posture, 3D Limb Orientation Estimation, AoA-based Orientation Refinement and Relative Distance-based Orientation Calibration, are proposed in this subsection.

¹The details of function $g(\cdot)$ can be found in [12].