International Journal of Control, Automation, and Systems (2012) 10(5):954-962 DOI 10.1007/s12555-012-0512-4
ISSN:1598-6446 eISSN:2005-4092 http://www.springer.com/12555
Walking Intent Detection Algorithm for Paraplegic Patients Using a Robotic Exoskeleton Walking Assistant with Crutches

Junyoung Jung, Inhun Jang, Robert Riener, and Hyunsub Park*

Abstract: This paper deals with a walking intent detection algorithm for paraplegic patients using a robotic exoskeleton walking assistant with crutches. User intent detection, by which a robotic exoskeleton recognizes the user's instructions and carries out the proper operation, is one of the most important capabilities of such a robot. For a robotic exoskeleton walking assistant, walking intent detection means that the robot recognizes, from the user's intent, the best time to start walking and which behavior should be executed. To enable this detection, an algorithm derived from a finite Moore Automaton for Robot Behavior (MARB) has been developed. A walking stability criterion model is also presented for use in the proposed algorithm as a trigger signal for walking. The proposed algorithm and the walking stability criterion modeling were verified using the robotic exoskeleton walking assistant ROBIN.

Keywords: Paraplegic patients, ROBIN, robotic exoskeleton walking assistant, walking intent detection.
__________
Manuscript received May 3, 2011; revised March 20, 2012; accepted May 31, 2012. Recommended by Editorial Board member Sooyeong Yi under the direction of Editor Hyouk Ryeol Choi.
Junyoung Jung is with the School of Intelligent Robotics, University of Science and Technology, 176 Gajung-dong, 217 Gajungro Yuseong-gu, Daejeon 305-350, Korea (e-mail: paran1@kitech.re.kr).
Robert Riener is with Rehabilitation Engineering at ETH Zurich, Tannenstrasse 1, 8092 Zurich, Switzerland (e-mail: [email protected]).
Inhun Jang and Hyunsub Park are with the Applied Robot Technology R&D Group, Korea Institute of Industrial Technology, 1271-18, Sa-3-dong, Sangrok-gu, Ansan 426-910, Korea (e-mails: {inhuns, hsubpark}@kitech.re.kr).
* Corresponding author.
© ICROS, KIEE and Springer 2012

1. INTRODUCTION

Recently, a lot of attention has been given to developing robotic exoskeletons. A robotic exoskeleton is a robot designed to be worn on the human body, with segments and joints corresponding to those of the wearer [1,2]. These exoskeletons are built for two purposes [2]. The first is to augment a person's strength: soldiers who must carry heavy equipment in battlefield situations need devices to aid them, and elderly people with weakened muscles need help getting around; a robotic exoskeleton can help in both situations. The second purpose is rehabilitation of hemiplegic or paraplegic patients, who could become independently mobile again using a robotic exoskeleton.

An important part of the technology making up a robotic exoskeleton is human-robot interaction [3]. Robotic exoskeletons are like a second skin: they are operated by their users and, in turn, affect the user's body through their frame. User intent detection, one part of human-robot interaction, is the method by which the exoskeleton uses its sensor system to recognize the instructions of the user from the user's behaviors, neural signals, or thoughts. User intent detection has two aspects: recognizing the correct behavior and the correct time. If the robot misunderstands what it is instructed to do, the user cannot control the robot and can even be injured by it; if the robot recognizes the user's intent too slowly or too quickly, the user may feel discomfort. User intent detection in an exoskeleton must therefore avoid both pitfalls.

HAL [4-6] and BLEEX [7-9] are well-known robotic exoskeletons for augmenting human power. HAL uses electromyogram (EMG) sensors to detect user intent. BLEEX (Berkeley Lower Extremity Exoskeleton) assists soldiers in carrying equipment and uses a force detection method: when the soldier moves a leg, the robot detects the movement as torque using precise models of the exoskeleton and the wearer [7]. The user simply moves their limbs normally, and since nothing is wrong with their body, these methods can easily detect the variation of torque or neural signals and recover both the behavior and the timing of the wearer's intent.

In exoskeletons for rehabilitation, however, the user intent detection methods described above are not suitable. Hemiplegic patients cannot move one side of their body well, and paraplegic patients cannot move their lower body at all because their neural signals cannot reach their lower limbs. New approaches are needed for these patients. For hemiplegic patients, a single leg version of HAL employs an intent detection method based on foot force sensors [10]. For paraplegics, the focus of research on detecting user intentions has shifted to using force
sensors attached to the bottoms of the robot's feet. With these new approaches, HAL is able to assist paraplegic patients in walking [11-13]: the force on each foot is measured by Floor Reaction Force (FRF) sensors, and this information is used to detect the user's intent. Another robotic exoskeleton walking assistant is ReWalk [14,15], which uses tilt sensors and foot force sensors to detect the user's intent. The common characteristic of these robotic exoskeleton walking assistants is that they need additional aids, such as crutches, to maintain balance when standing or walking; these aids turn the robot's locomotion into crutch walking. For crutch walking, simple conditions determine whether walking is possible: the positions of the crutch tips, the attitude of the crutches, and the force loaded onto each crutch and foot. If all conditions are satisfied, the robot can walk; if not, the robot cannot walk or can lose its balance. We refer to the time when the conditions are satisfied as the walking-enabled time. However, the previously described intent detection methods only consider bipedal locomotion [11,12,14,15]. Although these techniques can detect a time, which we refer to as the intent-detected time, that time might not coincide with the walking-enabled time. This paper proposes a user walking intent detection algorithm derived from a finite Moore Automaton for Robot Behavior (MARB) together with a Walking Stability Criterion Model (WSCM) serving as a transition condition between states of the MARB. The WSCM detects when the walking-enabled time coincides with the intent-detected time. When walking with crutches, a triangular area is generated on the ground by the crutch tips and the stance foot. If the user's center of mass is in this triangular area, the robot can walk safely because the user's weight is not loaded on the swing leg. We define this triangular area as the stable area.
The WSCM assesses whether or not the mass center is in the stable area. The proposed algorithm and model are implemented in a robotic exoskeleton walking assistant and verified with a paraplegic patient. Section 2 of our paper introduces the specifications of ROBIN, the robotic walking assistant, and the organization of its sensors. Section 3 describes the proposed algorithm for walking intent detection. Section 4 shows the experimental results and verifies the performance of the proposed algorithm in ROBIN with a paraplegic patient. Finally, Section 5 contains our conclusions.
2. ROBIN, THE ROBOTIC EXOSKELETON WALKING ASSISTANT

2.1. Introduction of ROBIN
The robotic exoskeleton ROBIN has been developed to rehabilitate paraplegic patients and enhance their quality of life. Paraplegics cannot move their lower body because of an injured section of the spine or other health issues; it is difficult for them to lead a normal life, and they are prone to additional health problems. To rehabilitate paraplegic patients, ROBIN has four main functions: walking, standing up, sitting down, and climbing stairs.

ROBIN is composed of four parts: mechanical parts, control parts, sensors, and a power pack. Fig. 1 shows the appearance of ROBIN.

Fig. 1. The ROBIN robotic exoskeleton and its backpack containing a main controller and a 24 V battery.

The exoskeletal frame, bends, joints, and crutches comprise the mechanical parts. Torque from the actuators of the robot is transmitted to the lower body of the paraplegic to provide movement and support. There are hip, knee, and ankle joints for each leg; among these joints, the hip and knee are active, and the ankle is passive, using a Klenzak brace. Crutches are needed to maintain the balance of the robot while standing or walking.

The control parts of ROBIN generate motor control commands derived from the user intent detected by the sensors. The main controller of ROBIN is an ultra mobile PC, and two 2-axis motor controllers attached to the thighs of the exoskeletal frame control the four motors. The sensors and the main controller communicate via a Controller Area Network (CAN) and ZigBee wireless communication.

The third part of ROBIN is the sensors. To detect user intent and monitor the states of the robot, two sorts of sensors are equipped in ROBIN: the first is the Force Sensing Resistor (FSR) and the second is the encoder. An FSR measures zero to ten kilograms and returns a linearized 1-byte value from 0 to 255; a value of one corresponds to 400 g. A total of 18 FSRs are installed in ROBIN to estimate the variation of the mass center of the user. On the bottom of each foot, 8 FSRs measure ground reaction forces.

Fig. 2. FSR sensors to measure ground reaction forces in the foot on the ROBIN robotic exoskeleton and in the tip of the crutch. Each number denotes an FSR sensor and its position.

Also, each
crutch has one FSR in its tip. These FSR sensor configurations are inspired by several measurement methods for rehabilitation [16,17]. The detailed positions of these FSRs are shown in Fig. 2. High resolution encoders are attached to each active joint; using them, ROBIN can estimate its posture. Finally, a 24 V battery serves as the power pack. This battery is located in the backpack with the main controller, as shown in Fig. 1.

3. ROBIN'S WALKING INTENT DETECTION METHOD

3.1. Walking intent detection algorithm
As described above, robotic exoskeleton walking assistants need to detect both the behavior that the user wants to perform and the correct time to perform it. Paraplegic patients are additionally limited in moving and in transmitting their neural signals, so it is difficult for a robot to detect the motion intent of the lower limbs in detail, such as raising a thigh, bending a knee, or moving an ankle. Therefore, we must define unit walking behaviors whose composition represents walking for the robot. And when identifying the correct time to start walking, the robot must consider both the walking-enabled time and the intent-detected time.

To let the robot recognize behavior and time from user intent, we developed a walking intent detection algorithm for ROBIN derived from MARB [18]. Using MARB, the unit walking behaviors of the robot are modeled as states of the robot, and the transition function determines the start time and the proper unit behavior at that time. If A is the walking intent detection algorithm, then the definition of the MARB is:

A = (Q, Σ, Ω, δ, λ, q0),   (1)

where
- Q = Op × (C)* is the set of states. For any element x, the notation x_q denotes that x belongs to the state q.
- The input alphabet: Σ = V = B^|H|.
- The output alphabet: Ω = Op.
- The transition function δ: Q × V → Q:

δ(q, v) = q' if ∃(ck, idk) in q such that q' is identified by idk and Eval(ck, v) = true; δ(q, v) = q otherwise,   (2)

for q ∈ Q, v ∈ V.
- The output function λ: Q → Op: λ(q) = op_q = (cmd_q, par_q), for q ∈ Q.
- The initial state: q0 ∈ Q.

The user's intent detection algorithm A is thus modeled as in (1), a tuple of sets and functions, where Q is the set of states
of MARB, Op is the set of operations to be performed in each state, and C is the set of transition conditions. A state q ∈ Q is defined as q = (id, op, (c1, id1), ..., (c|q|, id|q|)), where id is a unique identifier of the state and |q| denotes the number of transition conditions of state q. op = (cmd, par) ∈ Op comprises a command and its parameters. Each pair (ck, idk) describes a possible transition from this state: ck is the transition condition and idk is the identifier of the next state. A condition is defined as c ::= true | false | (ck ∘ cj) | f(v) ∈ C, where ∘ ∈ {AND, OR}, ck and cj are conditions, and f(v) is a function judging whether the inputs v satisfy a given condition. The output range of each of the 18 FSR sensors is B = {0, ..., 255}, H = {h1, ..., h18} denotes the 18 FSR sensors, and v = (v1, ..., v18) ∈ V := B^18 is the vector of actual FSR sensor values. The output alphabet Ω is defined over op; in ROBIN, op is one of the unit walking behaviors defined in Subsection 3.2. The main problems of user intent detection, time decision and behavior selection, are solved by the transition function δ: it validates whether a condition is true and, if so, the transition changes the robot's current state. The output function then executes the operation proper to the current state. q0 is the initial state.

3.2. State modeling for walking intent detection
When developing an algorithm to detect user intent in walking, it is important to know what behavior should correspond to each user intent; therefore, an analysis of normal walking was conducted. In classic studies of normal walking, researchers divide walking into two behaviors, stance and swing, which can be further divided into eight sub-behaviors as in [19]. In HAL, walking behaviors were also divided into stance and swing [11], and the sub-behaviors were ignored because the researchers considered them continuous within a walking pattern; their intent detection method therefore focused on detecting the start signals of swing and stance.

In this paper, we note that stance and swing occur simultaneously. We therefore focused on the shape and angle of the legs at the start and end of walking and classified three behaviors: first step walking, continuous step walking, and final step walking. Using these unit walking behaviors, we defined states describing the condition of the robot. A total of seven states were defined from the analysis of walking: the standing, ready to walk, walked, and walking completed states were added to express the robot's condition. The relation of the state set Q is depicted in Fig. 3. The user's walking intent detection is conducted using these states and their transitions. The meaning of each state is as follows:
- id q0 = Standing: the user is standing with the exoskeleton robot and crutches.
- id q1 = Ready to walk: the user is preparing to walk and the robot starts to detect the
Fig. 3. State diagram for walking intent detection.

Table 1. Id and op of the states.

  State q   id                   op = (cmd, par)
  q0        Standing             -
  q1        Ready to walk        -
  q2        First step walking   (first step, right or left leg)
  q3        Walked               -
  q4        Walking              (continuous step, right or left leg)
  q5        Final step walking   (final step, right or left leg)
  q6        Walking completed    -
user's intent.
- id q2 = First step walking: the robot is moving its legs for the first step.
- id q3 = Walked: the robot has finished a walking behavior and is waiting for a signal to start the next step.
- id q4 = Walking: the robot is performing the continuous-step walking behavior.
- id q5 = Final step walking: the robot is performing final-step walking.
- id q6 = Walking Completed: right after final-step walking, the robot transitions to this state.

The initial state q0 is standing. The output op of each state is described in Table 1. When the state is changed by the transition function, the robot executes its operation through the output function. An operation is divided into two components: a command and parameters. There are three commands: first step, continuous step, and final step. Each command needs a parameter determining which leg will swing, left or right, and this is determined by the previous step's parameter: if the previous step's parameter was left, the next step's will be right.

3.3. Transition function and conditions
In order to automatically change the state of the robot, the function described in (2) is used. The transition function δ determines the next state using the input alphabet and conditions as inputs. The input alphabet consists of the measured values of the robot's FSR sensors. The
Table 2. Conditions for the transition function.

  Current state id     Crutch On (c1)   Crutch Off-On (c2)   Foot On (c3)   Mass center shift detection (c4)   Next state id
  Ready to walk        True             True                 True           True                               First step walking
  First step walking   True             False                True           False                              Walked
  Walked               True             True                 True           True                               Walking
  Walked               True             False                True           True                               Final step walking
  Walking              True             False                True           False                              Walked
  Final step walking   True             False                True           False                              Walking completed
  Walking completed    -                -                    -              -                                  Standing
main controller executes the transition function every fifteen milliseconds to check whether the sensor values satisfy the conditions. The three unit walking behaviors can be classified by the movement of the crutches and the state of the robot. To perform the first step, the state of the robot should be ready to walk and the user moves the crutches forward. The continuous step, executed after the first step, is also triggered by the forward movement of the crutches. When the user wants to take the final step, the crutches do not move forward but stay in place. The transition function uses this crutch movement information and the detection result of the movement of the user's mass center as transition conditions, both to start walking and to select the proper walking behavior. The transition conditions for walking intent detection are as follows:
1) Crutch On: this condition denotes whether or not the weight of the user is distributed onto the crutches. If the FSR values of the crutches are larger than a threshold, the condition is true:

Cl* > αthres and Cr* > αthres,   (3)
where Cl* = h17 and Cr* = h18.
2) Crutch Off-On: when the user raises the crutches, moves them forward, and pushes them onto the ground, this condition is verified as true. It is also based on the crutch FSRs:

Cl* > αthres and Cr* > αthres, then
Cl* ≤ αthres and Cr* ≤ αthres, then
Cl* > αthres and Cr* > αthres.   (4)

3) Foot On: when each foot of the robot is on the ground, this condition is true:

Fl* > βthres and Fr* > βthres,   (5)

where Fl* = Σ_{i=1}^{8} h_i and Fr* = Σ_{i=9}^{16} h_i.
4) Mass Center Shift Detection: this condition is the
detection result of the movement of the user's mass center. We use the WSCM, described in detail in Subsection 3.4, to detect it. The relation of the conditions to the transition function is described in Table 2.

3.4. Walking stability criterion modeling
When deciding the time at which the robot should start a walking behavior from the paraplegic's intent, it matters whether the robot can move safely at that time. Therefore, the time when the robot recognizes the user's intent and the time when the robot can move safely should coincide. As described in the introduction, robotic exoskeleton walking assistants walk in the form of crutch walking. If the mass center of the user is inside the triangular area generated by the crutch tips and the stance foot, and the robot starts a unit walking behavior at that time, the user will not fall. We refer to this triangular area as the stable area. In order to evaluate whether or not the mass center is in the stable area, a walking stability criterion model (WSCM) was developed. In the walking situation shown in Fig. 4, the WSCM is used to recognize that the mass center M is in the stable area. The reference frame of the WSCM is always at Fl. For a feasible calculation of the proposed model, some notation is defined:
- Fl: the center position of the left foot.
- Fr: the center position of the right foot.
- Cl: the center position of the left crutch.
- Cr: the center position of the right crutch.
- F: the center of pressure (CoP) of the area between the feet (left, right).
- C: the center of pressure (CoP) of the area between the crutches (left, right).
- M: the center of pressure (CoP) of the area between the crutches and feet.
- Forces: Fl*, Fr*, Cl*, and Cr* (at Fl, Fr, Cl, and Cr).
- Total foot reaction force: F* = Fl* + Fr*.
- Total crutch reaction force: C* = Cl* + Cr*.
- Total floor reaction force: M* = C* + F*.

x = Fr*/F* if Fr* > Fl*, and x = Fl*/F* otherwise,   (6)

y = C*/M*,   (7)
where x is the ratio of the larger of the two foot reaction forces to the total foot reaction force, and y is the ratio of the total crutch reaction force to the total floor reaction force. We assume that all geometric relationships in Fig. 4 are known and that Cl* and Cr* are equal. In a left continuous step walking situation, the user's mass center M should be located inside the triangular area generated by Cl, Cr, and Fr for safe walking. To determine whether M is in this stable triangular area, the following inequality for the edge connecting Cl and Fr must hold:

yM(x, y) > yFr + [(yCl − yFr) / (xCl − xFr)] (xM(x, y) − xFr),   (8)

where only the point M is unknown. Inserting the geometric parameters from Fig. 4 and simplifying yields

yM(x, y) > s − [2d / (h + c)] (xM(x, y) − h).   (9)

The position of M can be computed using the ratios x and y:

(xM, yM) = (xF, yF) + y ((xC, yC) − (xF, yF)),   (10)

(xF, yF) = x (h, s),   (11)

(xC, yC) = (h/2, s + d).   (12)

Inserting (10)-(12) into (9), the final inequality that must be met for stability is

y > [(−2dh/(h + c) − s) x + 2dh/(h + c) + s] / [(−2dh/(h + c) − s) x + dh/(h + c) + s + d].   (13)
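For concreteness, the final inequality can be evaluated directly in code. The sketch below (Python; the function name is ours, and the closed form follows the symbols s, h, d, c of the derivation in this subsection) tests whether a force-ratio pair (x, y) lies in the stable area:

```python
def wscm_stable(x: float, y: float, s: float, h: float, d: float, c: float) -> bool:
    """Check the WSCM stability inequality (13).

    x: foot force ratio of (6); y: crutch/floor force ratio of (7);
    s: step size, h: width between the feet, d: foot-to-crutch-tip
    distance, c: width between the crutch tips (same length unit).
    """
    k = 2.0 * d * h / (h + c)             # recurring term 2dh/(h+c)
    num = (-k - s) * x + k + s            # numerator of (13)
    den = (-k - s) * x + k / 2.0 + s + d  # denominator of (13), positive for x <= 1
    return y * den > num                  # equivalent to y > num/den
```

With the parameters of Subsection 3.4 (s = 25, h = 20, d = 40, c = 60), a nearly unloaded swing leg (x close to 1) drives the numerator toward zero, so even a small crutch-load ratio y satisfies the criterion.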
Equation (13) is the criterion with which the robot detects the variation in the wearer's body when the wearer wants to walk. To show how a stable area is generated, we set the parameters of Fig. 4 as listed below and draw the result; Fig. 5 shows the stable area.

Fig. 4. The geometric relationships of feet and crutches after a right step.

- Step size s = 25 cm
- Width between the feet h = 20 cm
Fig. 5. The stable area generated by the WSCM and the previously described parameters.

- Distance between right foot and right crutch tip d = 40 cm
- Width between the crutch tips c = 60 cm

The predetermined parameters are the step size s and the width between the feet h: s is determined by the gait pattern of the robotic exoskeleton, and h by the size of the exoskeletal frame. In contrast, the parameters d and c are not known in advance; they change at every step with the user. To verify the WSCM modeling, the positions of the crutch tips were marked on the ground; this is not possible in an unstructured environment, but it is sufficient to verify the WSCM. In Fig. 5, the x-axis is the x of (6) and the y-axis is the y of (7), and the grey area is the stable area. The exoskeleton can determine whether or not the user's mass center is in the stable area from the relation of x and y. For example, a common weight distribution of the crutches and the feet when walking with crutches is 8:2. This means that the value of y in Fig. 5 is 0.2, so the safe value of x is 0.8: if the weight distribution between the feet is 2:8, the robot can start to walk. In the user's intent detection algorithm, this WSCM is used as condition c4 of the transition function.
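Combining (6), (7), and (13), condition c4 might be computed from the 18 FSR readings roughly as follows. This is a sketch, not ROBIN's actual firmware; the sensor indexing follows Fig. 2 (h1-h8 left foot, h9-h16 right foot, h17/h18 crutch tips), and the foot width is renamed w to avoid clashing with the sensor symbol h:

```python
def mass_center_shift_detected(h, s=25.0, w=20.0, d=40.0, c=60.0):
    """Condition c4: is the user's mass center inside the stable area?

    h: sequence of 18 FSR values; h[0:8] left foot, h[8:16] right foot,
    h[16] left crutch tip, h[17] right crutch tip (layout of Fig. 2).
    s, w, d, c: WSCM geometry (defaults from Subsection 3.4, in cm).
    """
    fl, fr = sum(h[0:8]), sum(h[8:16])  # foot reaction forces Fl*, Fr*
    cl, cr = h[16], h[17]               # crutch reaction forces Cl*, Cr*
    f_total = fl + fr                   # F* = Fl* + Fr*
    m_total = f_total + cl + cr         # M* = C* + F*
    if f_total == 0 or m_total == 0:
        return False                    # nothing loaded yet
    x = max(fl, fr) / f_total           # foot force ratio, (6)
    y = (cl + cr) / m_total             # crutch/floor force ratio, (7)
    k = 2.0 * d * w / (w + c)
    num = (-k - s) * x + k + s          # inequality (13)
    den = (-k - s) * x + k / 2.0 + s + d
    return y * den > num
```

Returning True corresponds to condition c4 in Table 2 being satisfied for the current sensor sample.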
4. EXPERIMENTAL RESULTS AND DISCUSSION

In order to verify the proposed algorithm and the WSCM, two experiments were conducted:
1) verifying the WSCM with a non-handicapped person;
2) verifying the walking intent detection algorithm with a paraplegic patient.

4.1. Experiment to verify WSCM
In order to verify the effectiveness of the walking stability criterion modeling of the proposed algorithm, we experimented with a typical male subject. The subject wore an experimental device which had the same structure as ROBIN but without actuators, and walked around. The focus of the experiment was whether a normal gait with crutches also starts only when the mass center of the user is located in a stable area. We set the WSCM parameters to the values described in Subsection 3.4.

Fig. 6 shows the results of the experiment in three graphs. The first graph shows the result of the WSCM: the foot force ratio x of (6), the floor reaction force ratio y of (7), and a desired floor reaction force ratio (Desired y) are marked. The second graph shows the values of the FSR sensors of the experimental platform. The final graph shows the encoder values; from it, we can see the start of walking. In the graphs, one tick is 50 ms. Fig. 6 shows the result of 4 steps. After time tick 50, the grey shading denotes right swing walking and the white areas denote left swing walking. In the time region 0 to 50, the subject is standing, so the value of x is near 0.5 and the value of y is 0; the subject did not use the crutches for balance while standing. The subject started to walk from time tick 50 and began using the crutches to distribute weight. After time tick 60, the subject moves the mass center; if the swing leg is completely off the ground, the value of x goes to 1. Whether the walker starts walking in a stable area can be checked in the first graph: if the value of y is over the desired value, the human's mass center is located in a stable area. In this experiment, the left leg swing satisfies our modeling; however, the right leg swing starts about 0.5 sec before the mass center is located in a stable area.

Fig. 6. Experiment result verifying WSCM with a common non-handicapped person.

4.2. Experiment to verify the walking intent detection algorithm
The proposed user intent detection algorithm was implemented in ROBIN and verified with a paraplegic patient. Fig. 7 shows the overall view of the experiment. To maintain safety, several safety devices such as two parallel bars and hangers were used, and a doctor and a physiotherapist participated in the experiment. The experiment was conducted with a 70 kg subject, 176 cm tall, injured at the tenth thoracic vertebra (T10); the subject cannot move his lower body himself. The subject followed the instructions below to operate ROBIN:
1) The subject wearing the exoskeleton starts standing.
2) The subject moves the crutches simultaneously so that the tips of the crutches are located at the marked positions on the ground.
3) The subject maintains his standing posture by distributing his weight between his crutches and feet.
4) In order to start walking, the subject moves his mass center toward the opposite leg.
5) The robot detects this movement and starts walking.
6) The subject repeats steps 2 to 5.

For this experiment, the thresholds αthres and βthres used to detect whether the conditions of the transition function are satisfied were both set to 40; this value was decided after some trials. The WSCM parameters were the same as those in Subsection 3.4. Fig. 8 shows the results of the experiment. The x-axis is time, one time tick being 50 ms; the y-axes show, in sequence, the stability criterion values, the FSR sensor values, the angles of the robot motor axes, and the monitored robot state. At time tick 80, the stability criterion's y value is
higher than the desired y calculated from the value of x in the WSCM. However, the robot did not walk, because it was in the standing state, in which it does not detect the user's intent. The state of the robot transitioned to the ready to walk state at time tick 130; in this state, the robot detects the user's intent. At time tick 150, the value of y in the first graph exceeded the desired y, so the robot started to walk, as can be seen in Fig. 8(b) and Fig. 8(d). At time tick 230, the robot completed the right first step and its state changed to the walked state, in which it waits for the signal of a continuous step or a final step. The subject conducted 6 steps after time tick 230 and performed a final step at time tick 810. After the final step, the state of the robot changed to standing at time tick 910.

4.3. Discussion
Two experiments were conducted to verify the proposed algorithm and the WSCM modeling. From the first experiment, we can see that when non-handicapped people walk with crutches, they start walking after moving their mass center into a stable area. This means that the WSCM fits the concept of the walking intent detection algorithm, giving paraplegic patients a walking method similar to that of non-handicapped people. However, there are some cases in which subjects start walking before their mass center is in a stable area; these cases also occurred when the subject was changed. We think this phenomenon is due to the fact that the human body is not symmetric, a property not reflected in the device used in this experiment. The results of the second experiment, with the paraplegic patient, showed the effectiveness of the proposed algorithm. The paraplegic patient commented that the robot reacted quickly and precisely to his intent.
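The state trace monitored in this experiment follows the transition table mechanically. As an illustration only, Fig. 3 and Table 2 might be encoded as a small Moore machine like this (Python; state names follow Table 1, and the condition tuple (c1, c2, c3, c4) is assumed to be evaluated elsewhere every 15 ms):

```python
# Rows of Table 2: (current state, (c1, c2, c3, c4)) -> next state.
TRANSITIONS = {
    ("Ready to walk",      (True, True,  True, True)):  "First step walking",
    ("First step walking", (True, False, True, False)): "Walked",
    ("Walked",             (True, True,  True, True)):  "Walking",
    ("Walked",             (True, False, True, True)):  "Final step walking",
    ("Walking",            (True, False, True, False)): "Walked",
    ("Final step walking", (True, False, True, False)): "Walking completed",
}

def step(state, conds):
    """One evaluation of the transition function delta of (2).

    Keeps the current state when no row of the table matches; the
    Walking completed state always returns to Standing, as in Fig. 3.
    """
    if state == "Walking completed":
        return "Standing"
    return TRANSITIONS.get((state, conds), state)
```

Feeding the condition tuples sampled during the experiment into step reproduces a state sequence like the one shown in Fig. 8(d).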
Comparing the proposed algorithm with the user intent detection method of HAL, the difference in the modeling of the unit walking behaviors makes a direct comparison impossible. However, the presented algorithm improves on that of HAL in two ways:
1) the time needed to detect user intent;
2) the clarity of the time and method for detecting final step intent.
Fig. 7. Experiment of walking with a complete paraplegic patient.
First, to detect a user's walking intent, HAL uses a threshold method with the threshold set to 50 N [11]; this threshold was acquired experimentally. Interpreted in terms of our WSCM, this means that HAL starts the swing when the value of x is near 1. That is sufficient for safe walking, but about 0.5 s slower than our algorithm, which can cause user discomfort. Moreover, the method of determining the threshold is ambiguous. Second, HAL uses a 5-second interval method to detect the user's final-step intention [11]. This method cannot react quickly and consumes the user's stamina, because the user must hold an unstable posture.
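The timing difference can be illustrated schematically. The sketch below compares a fixed-force-threshold trigger (HAL-style; it is assumed here, for illustration, that the trigger fires once the load on the stepping foot decays below 50 N) with a stability-criterion trigger that fires as soon as y crosses an assumed desired boundary. All signal values and the 0.3 boundary are synthetic, not measurements from the paper.

```python
# Compare the firing tick of two trigger rules on one synthetic
# weight-shift, sampled every 50 ms as in the experiment plots.

def first_tick(samples, fire):
    """Return the index of the first 50 ms tick at which fire() is true."""
    for t, s in enumerate(samples):
        if fire(s):
            return t
    return None

# Weight shift over 2 s: stepping-foot load decays 100 N -> 0 N while
# the normalized stability value y ramps 0 -> 1 (illustrative only).
samples = [{"force_N": 100.0 - 2.5 * t, "y": t / 40.0} for t in range(41)]

t_threshold = first_tick(samples, lambda s: s["force_N"] <= 50.0)  # fixed 50 N rule
t_wscm = first_tick(samples, lambda s: s["y"] > 0.3)               # assumed desired y

# On this synthetic shift the stability-criterion trigger fires
# (t_threshold - t_wscm) * 50 ms earlier than the fixed threshold.
delay_s = (t_threshold - t_wscm) * 0.05
```

The fixed threshold must wait until the weight shift is nearly complete, whereas the stability-criterion trigger fires as soon as the boundary is crossed, which is the source of the roughly 0.5 s advantage claimed above.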
Fig. 8. Experiment results verifying the proposed walking intent detection algorithm with the paraplegic patient: (a) walking stability criterion (x, y, and desired y); (b) FSR values of the right and left feet and crutches; (c) joint angles (deg) of the right and left hips and knees; (d) robot state (standing, ready to walk, walking first step, walked, walking, walking final step, walking completed). The horizontal axis of each plot is time in 50 ms ticks.

5. CONCLUSIONS

This paper presented a user walking intent detection algorithm for paraplegic patients using a robotic exoskeleton walking assistant with crutches. The walking intent detection problem is the problem of giving the robot the abilities to recognize the correct time to start walking and the proper unit walking behavior at that time from the user's intent. To provide these abilities, we developed an algorithm derived from MARB and constructed the WSCM. The proposed algorithm was implemented on the robotic exoskeleton ROBIN, and experiments with a paraplegic patient verified its effectiveness. In the future, we will develop an algorithm that estimates the positions of the crutch tips, so that the proposed algorithm can be applied in unstructured environments.
REFERENCES
[1] J. L. Pons, Wearable Robots: Biomechatronic Exoskeletons, John Wiley & Sons Ltd., London, 2008.
[2] A. M. Dollar and H. Herr, "Lower extremity exoskeletons and active orthoses: challenges and state-of-the-art," IEEE Trans. on Robotics, vol. 24, no. 1, pp. 144-158, 2008.
[3] J. E. Pratt, B. T. Krupp, C. J. Morse, and S. H. Collins, "The RoboKnee: an exoskeleton for enhancing strength and endurance during walking," Proc. IEEE Int. Conf. on Robotics and Automation, pp. 2430-2435, 2004.
[4] J. Okamura, H. Tanaka, and Y. Sankai, "EMG-based prototype powered assistive system for walking aid," Proc. Asian Symp. on Industrial Automation and Robotics, pp. 229-234, 1999.
[5] S. Lee and Y. Sankai, "Power assist control for walking aid with HAL-3 based on EMG and impedance adjustment around knee joint," Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 1499-1504, 2002.
[6] T. Nakai, S. Lee, H. Kawamoto, and Y. Sankai, "Development of powered assistive leg for walking aid using EMG and Linux," Proc. Asian Symp. on Industrial Automation and Robotics, pp. 295-299, 2001.
[7] H. Kazerooni, J.-L. Racine, L. Huang, and R. Steger, "On the control of the Berkeley lower extremity exoskeleton (BLEEX)," Proc. IEEE Int. Conf. on Robotics and Automation, pp. 4353-4360, 2005.
[8] H. Kazerooni, R. Steger, and L. Huang, "Hybrid control of the Berkeley lower extremity exoskeleton," Int. J. of Robotics Research, vol. 25, pp. 561-573, 2006.
[9] R. Steger, S. H. Kim, and H. Kazerooni, "Control scheme and networked control architecture for the Berkeley lower extremity exoskeleton," Proc. IEEE Int. Conf. on Robotics and Automation, pp. 3469-3476, 2006.
[10] H. Kawamoto, T. Hayashi, T. Sakurai, K. Eguchi, and Y. Sankai, "Development of single leg version of HAL for hemiplegia," Proc. of Int. Conf. of the IEEE EMBS, pp. 5038-5043, 2009.
[11] K. Suzuki, Y. Kawamura, T. Hayashi, T. Sakurai, Y. Hasegawa, and Y. Sankai, "Intention-based walking support for paraplegia patient," Proc. of IEEE Int. Conf. on Systems, Man, and Cybernetics, pp. 2707-2713, 2005.
[12] K. Suzuki, G. Mito, H. Kawamoto, Y. Hasegawa, and Y. Sankai, "Intention-based walking support for paraplegia patients with robot suit HAL," Advanced Robotics, vol. 21, no. 12, pp. 1441-1469, 2007.
[13] A. Tsukahara, Y. Hasegawa, and Y. Sankai, "Standing-up motion support for paraplegic patient with robot suit HAL," Int. Conf. on Rehabilitation Robotics, pp. 211-217, 2009.
[14] A. Goffer and K. Tivon, "Gait-locomotor apparatus," European Patent 1 260 201 A1, Nov. 27, 2002.
[15] A. Goffer, K. Tivon, C. Zilberstein, and Z. Yaacov, "Locomotion assisting device and method," United States Patent US 2010/0094188, 2010.
[16] N. K. Rana, "Application of force sensing resistor (FSR) in design of pressure scanning system for plantar pressure measurement," Proc. of the 2nd Int. Conf. on Computer and Electrical Engineering, pp. 678-685, 2009.
[17] G. V. Merrett, M. A. Ettabib, C. Peters, G. Hallett, and N. M. White, "Augmenting forearm crutches with wireless sensors for lower limb rehabilitation," Meas. Sci. Technol., vol. 21, no. 12, 2010.
[18] L. Konig, S. Mostaghim, and H. Schmeck, "Online and onboard evolution of robotic behavior using finite state machines," Proc. of the 8th Int. Conf. on Autonomous Agents and Multiagent Systems, pp. 1325-1326, 2009.
[19] J. Perry, Gait Analysis: Normal and Pathological Function, SLACK, New Jersey, 1992.
Junyoung Jung received his B.S. degree in Computer Engineering from Sejong University in 2006, and became a Ph.D. candidate in 2011 in the Department of Intelligent Robotics, University of Science and Technology. His research interests include exoskeleton robots, rehabilitation robots, artificial intelligence, and machine learning.

Inhun Jang received his M.S. and Ph.D. degrees in Electrical and Electronics Engineering from Chung-ang University in 1999 and 2010, respectively. Since 2009, he has been at KITECH, where he is currently a senior researcher in the Robot Convergence R&D Group. His research interests include rehabilitation robots and machine learning based on probabilistic inference.

Robert Riener received his Dipl.-Ing. degree in mechanical engineering and his Ph.D. degree from the Technische Universität München, Munich, Germany, in 1993 and 1997, respectively. In 1993, he joined the Institute of Automatic Control Engineering, where he pursued research into modeling and control of neuroprostheses. After postdoctoral work at the Centro di Bioingegneria, Politecnico di Milano, Milan, Italy, from 1998 to 1999, he returned to the Technische Universität München, where he coordinated several research projects and finished his Habilitation in the field of Biomechatronics, on multimodal VR applied to medicine, in January 2003. In May 2003, he obtained an assistant professorship for Rehabilitation Engineering at the Automatic Control Laboratory, and in June 2010 a full professorship in Sensory-Motor Systems, both at ETH Zurich, Switzerland. As he holds a double professorship with the University of Zurich, he is also active in the Spinal Cord Injury Center of the University Hospital Balgrist (Medical Faculty). His research interests involve biomechanics, virtual reality, and rehabilitation robotics.

Hyunsub Park received his B.S. degree in Mechanical Design Engineering from Seoul National University in 1984, and his Ph.D. in Production Engineering from KAIST in 1989. He now works for KITECH in the Robot Convergence R&D Group, and his research interests include rehabilitation robots and care robots.