1. Introduction
An autonomous mobile robot (AMR) is an intelligent robot that performs a given task by identifying the surrounding environment with its sensors and reacting to the current state by itself, without human intervention. Unlike a general manipulator in a fixed workspace [1], it requires intelligent processing in a flexible, changing working environment, and studies on fuzzy-rule-based control have therefore attracted attention in the field of autonomous mobile robots. Robust behavior in autonomous robots requires that uncertainty be accommodated by the robot system [2].
This paper describes a hierarchical behavior-based control architecture. It is organized as a hierarchy of fuzzy rule bases, allowing for the distribution of intelligence across specialized fuzzy behaviors. A fuzzy coordination scheme is also described that employs weighted decision making based on contextual behavior activation. Performance is demonstrated by simulation highlighting interesting aspects of the decision making process which arise from behavior interaction [3,4].
In this paper, we propose a new approach to designing a fuzzy controller that enhances the ability of mobile robots to respond to dynamic environments. Obstacle avoidance and trajectory planning are two distinct tasks, which makes a control structure that pursues both goals simultaneously complex. Each goal is represented as a cost function, and the cost functions are weighted appropriately, based on the robot's status and the environment, and combined. Fuzzy rules derived from expert knowledge are used to determine the weights. The proposed control algorithm includes fuzzy logic for goal approach, obstacle avoidance, and weight determination. We also utilize a sensor fusion method to calculate the distance and width of obstacles, enabling the robot to avoid them during navigation [5,6].
2. Fuzzy Controller Design
The proposed fuzzy controller is shown in Fig. 1. We define three major navigation goals, i.e., target orientation, obstacle avoidance, and minimal rotation, and represent each goal as a cost function. Note that the fusion process forms a single cost function by combining the individual cost functions with weights. In this fusion process, each command weight is inferred by a fuzzy algorithm, a typical artificial-intelligence scheme. With the proposed method, the AMR navigates intelligently by varying the weights depending on the environment and selects a final command that keeps the variation of orientation and velocity minimal according to the cost function.
Fig. 1. Overall structure of navigation algorithm
2.1 Command for moving toward the goal
The orientation command of the mobile robot is generated as the direction nearest to the target point. The command is defined by the distance to the target point when the robot moves from its present pose with orientation θ and velocity v. Therefore, the cost function is defined as Eq. (1).
\(\begin{align}E_{d}(\theta)=\left[x_{d}-\left(x_{c}+v \Delta t \cos \theta\right)\right]^{2}+\left[y_{d}-\left(y_{c}+v \Delta t \sin \theta\right)\right]^{2}\end{align}\) (1)
where v = vmax - k|θc - θ|, k represents the reduction ratio due to rotational movement, and (xc, yc) and (xd, yd) are the coordinates of the robot and the destination in the global coordinate frame.
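For concreteness, Eq. (1) can be evaluated for a candidate heading as in the following Python sketch; the function and argument names and the default time step are illustrative assumptions, not taken from the paper's implementation:

```python
import math

def goal_cost(theta, xc, yc, xd, yd, v_max, k, theta_c, dt=0.1):
    """Goal-approach cost E_d(theta) of Eq. (1): squared distance to the
    destination (xd, yd) after moving one step at heading theta."""
    # Speed shrinks with the required rotation: v = v_max - k|theta_c - theta|
    v = v_max - k * abs(theta_c - theta)
    x_next = xc + v * dt * math.cos(theta)
    y_next = yc + v * dt * math.sin(theta)
    return (xd - x_next) ** 2 + (yd - y_next) ** 2
```

Headings pointing at the destination yield a lower cost, so minimizing E_d steers the robot toward the goal.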
2.2 Command for avoiding obstacle
We represent the cost function for obstacle avoidance as the shortest distance to an obstacle, based on the sensor data in histogram form. The distance information is represented as a second-order energy and turned into a cost function by evaluating it over all θ, as shown in Eq. (2).
\(\begin{align}E_{o}(\theta)=d_{sensor}^{2}(\theta)\end{align}\) (2)
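A minimal Python sketch of Eq. (2), assuming the range histogram is stored as a list of distances indexed by discretised heading (the binning scheme and names are illustrative assumptions):

```python
import math

def obstacle_cost(theta, scan):
    """Obstacle cost E_o(theta) of Eq. (2): squared sensor distance at
    heading theta, looked up in a histogram of range readings.
    `scan[i]` holds the measured distance for the i-th angular bin."""
    n = len(scan)
    # Map theta in [0, 2*pi) onto one of the n equally spaced bins
    idx = int((theta % (2 * math.pi)) / (2 * math.pi) * n) % n
    return scan[idx] ** 2
```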
To navigate to the goal in a dynamic environment, the mobile robot should recognize dynamic variation and react to it. For this, the mobile robot extracts the variation of the surrounding environment by comparing the past with the present. For continuous movement of the robot, the transformation of a past frame w.r.t. the present frame should be defined clearly.
In Fig. 2, the vector \(P_{\alpha}^{n-1}\) is defined as a position vector w.r.t. the {n-1} frame and \(P_{\alpha}^{n}\) as the same vector w.r.t. the {n} frame. Then, we obtain the relation between \(P_{\alpha}^{n-1}\) and \(P_{\alpha}^{n}\) as follows.
Fig. 2. Transformation of frame
\(\begin{align}P_{\alpha}^{n}=R_{n-1}^{n}\left(P_{\alpha}^{n-1}-d_{n-1}^{n}\right)\end{align}\) (3)
Here, \(R_{n-1}^{n}\) is the rotation matrix from the {n-1} frame to the {n} frame, defined as Eq. (4), and \(d_{n-1}^{n}\) is the translation vector from the {n-1} frame to the {n} frame, as shown in Eq. (5).
\(\begin{align}R_{n-1}^{n}=\left[\begin{array}{cc}\cos (\Delta \theta) & \sin (\Delta \theta) \\ -\sin (\Delta \theta) & \cos (\Delta \theta)\end{array}\right]\end{align}\) (4)
where Δθ = θn - θn-1
\(\begin{align}d_{n-1}^{n}=\left[\begin{array}{cc}\cos \theta_{n-1} & \sin \theta_{n-1} \\ -\sin \theta_{n-1} & \cos \theta_{n-1}\end{array}\right]\left[\begin{array}{l}x_{n}-x_{n-1} \\ y_{n}-y_{n-1}\end{array}\right]\end{align}\) (5)
According to Eq. (3), the environment information measured in the {n-1} frame can be represented w.r.t. the {n} frame. Thus, if Wn-1 and Wn are the environment information in polar coordinates measured in the {n-1} and {n} frames, respectively, we can represent Wn-1 w.r.t. the {n} frame and extract the moving object by Eq. (6) in the {n} frame.
\(\begin{align}movement={ }^{n}W_{n-1} \cdot\left({ }^{n}W_{n-1}-W_{n}\right)\end{align}\) (6)
where \({ }^{n}W_{n-1}\) represents Wn-1 transformed into the {n} frame.
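The frame transformation of Eqs. (3)-(5) and the moving-object extraction of Eq. (6) can be sketched in Python with NumPy as follows; function and variable names are illustrative, not taken from the paper's implementation:

```python
import numpy as np

def to_current_frame(p_prev, pose_prev, pose_now):
    """Transform a point observed in the {n-1} frame into the {n} frame
    using Eqs. (3)-(5). Poses are (x, y, theta) in the global frame."""
    x0, y0, th0 = pose_prev
    x1, y1, th1 = pose_now
    dth = th1 - th0
    # Rotation from {n-1} to {n}, Eq. (4)
    R = np.array([[np.cos(dth),  np.sin(dth)],
                  [-np.sin(dth), np.cos(dth)]])
    # Translation expressed in the {n-1} frame, Eq. (5)
    R_prev = np.array([[np.cos(th0),  np.sin(th0)],
                       [-np.sin(th0), np.cos(th0)]])
    d = R_prev @ np.array([x1 - x0, y1 - y0])
    # Eq. (3)
    return R @ (np.asarray(p_prev) - d)

def movement(w_prev_in_n, w_now):
    """Moving-object indicator of Eq. (6): elementwise product of the
    transformed past scan with its change relative to the present scan."""
    w_prev_in_n = np.asarray(w_prev_in_n, float)
    return w_prev_in_n * (w_prev_in_n - np.asarray(w_now, float))
```

For a pure translation of the robot, a static point simply shifts by the travelled distance in the new frame, and `movement` is zero wherever the scan is unchanged.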
2.3 Command for minimizing rotation
Minimizing rotational movement aims to rotate the wheels smoothly by restraining rapid motion. The cost function is defined to be minimal at the present orientation and is a second-order function of the rotation angle θ, as in Eq. (7).
\(\begin{align}E_{r}(\theta)=\left(\theta_{c}-\theta\right)^{2}\end{align}\) (7)
where θc is the present angle.
The command represented by the cost function has three different goals to be satisfied at the same time. Each goal contributes to the command with a different weight, as shown in Eq. (8).
\(\begin{align}E(\theta)=w_{1}E_{d}(\theta)+w_{2}E_{o}(\theta)+w_{3}E_{r}(\theta)\end{align}\) (8)
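The weighted fusion of Eq. (8) then reduces to evaluating the combined cost over a set of candidate headings and taking the minimiser; a minimal sketch, with illustrative names:

```python
def select_heading(costs, weights, thetas):
    """Combine the three cost functions with their fuzzy-inferred weights
    (Eq. (8)) and return the candidate heading minimising the total.
    `costs` is a tuple (E_d, E_o, E_r) of functions of theta, `weights`
    is (w1, w2, w3), and `thetas` is a list of candidate headings."""
    def total(theta):
        return sum(w * E(theta) for w, E in zip(weights, costs))
    return min(thetas, key=total)
```

In the proposed scheme the weights w1, w2, w3 are not fixed but are produced by the fuzzy inference described in the next section.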
3. Navigation Hierarchy by Fuzzy Logic
Primitive behaviors are low-level behaviors that typically take inputs from the robot's sensors and send outputs to the robot's actuators, forming a nonlinear mapping between them. Composite behaviors map sensory input and/or global constraints to the Degree of Applicability (DOA) of the relevant primitive behaviors [6]. The DOA is a measure of the instantaneous level of activation of a behavior. The primitive behaviors are weighted by the DOA and aggregated to form composite behaviors. This is a general form of behavior fusion that degenerates to behavior switching for DOA = 0 or 1 [9].
At the primitive level, behaviors are synthesized as fuzzy rule bases, i.e., collections of fuzzy if-then rules. Each behavior is encoded with a distinct control policy governed by fuzzy inference. If X and Y are the input and output universes of discourse of a behavior with a rule base of size n, the usual fuzzy if-then rule takes the following form:
IF x is Ai THEN y is Bi (9)
where x and y represent input and output fuzzy linguistic variables, respectively, and Ai and Bi (i = 1 ⋯ n) are fuzzy subsets representing linguistic values of x and y. Typically, x refers to sensory data and y to actuator control signals. The antecedent and the consequent can also be conjunctions of propositions (e.g. IF x1 is Ai,1 AND ⋯ AND xn is Ai,n THEN ⋯).
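A primitive behavior of the form of Eq. (9) can be sketched as a small rule base with triangular antecedent memberships and singleton consequents (a common simplification; the membership shapes and rule set here are illustrative assumptions, not the paper's):

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a to its peak at b
    and falling back to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fire_rules(x, rules):
    """Evaluate rules 'IF x is A_i THEN y is B_i' (Eq. (9)) with
    singleton consequents and weighted-average defuzzification.
    `rules` is a list of ((a, b, c), y_i) pairs: a triangular fuzzy
    set A_i and the crisp consequent value y_i."""
    num = den = 0.0
    for (a, b, c), y_i in rules:
        mu = tri(x, a, b, c)   # degree to which the antecedent matches x
        num += mu * y_i
        den += mu
    return num / den if den else 0.0
```

For example, two overlapping rules mapping obstacle distance to a steering value blend smoothly as the input moves between their supports.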
At the composition level, the DOA is evaluated using a fuzzy rule base in which global knowledge and constraints are incorporated. An activation level (threshold) at which rules become applicable is applied to the DOA, giving the system more degrees of freedom. The DOA of each primitive behavior is specified in the consequent of applicability rules of the form:
IF x is Ai THEN ⍺j is Di (10)
where x is typically a global constraint, ⍺j ∈ [0,1] is the DOA, and Ai and Di are the fuzzy sets of the linguistic variables describing them, respectively. As in the former case, the antecedent and consequent can also be conjunctions of propositions.
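The DOA-weighted aggregation described above can be sketched as follows; the threshold value and names are illustrative assumptions:

```python
def fuse_behaviors(outputs, doas, threshold=0.1):
    """Weight each primitive behavior's output by its DOA (Eq. (10))
    and aggregate. DOAs below the activation threshold are dropped;
    with DOA = 0 or 1 the fusion degenerates to behavior switching."""
    active = [(a, u) for a, u in zip(doas, outputs) if a >= threshold]
    total = sum(a for a, _ in active)
    if total == 0.0:
        return 0.0          # no behavior applicable
    return sum(a * u for a, u in active) / total
```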
An action hierarchy for autonomous navigation might be organized as in Fig. 3. It implies that goal-directed navigation can be decomposed as a behavioral function of Seek-goal and Follow-route. These behaviors can be further decomposed into the primitive behaviors shown, with dependencies indicated by the adjoining lines. Avoid-obstacle and Minimize-rotation are self-explanatory. The Go-to behavior directs the robot to navigate along a straight-line trajectory to a goal position. The Minimize-rotation behavior controls the robot so that the wheels rotate smoothly by restraining rapid motion [6,7].
Fig. 3. Hierarchical navigation structure of mobile robot
4. Experiments
After satisfactory simulation performance [10], the proposed navigation control system has been implemented and tested in a laboratory environment on a Jetbot AI vision robot based on the NVIDIA JETSON NANO board.
The Jetbot AI vision robot is equipped with a 3-DOF camera PTZ and an up-and-down lifting system (Fig. 4). The robot, manufactured by NVIDIA, is a differentially driven platform configured with two crawler drive wheels and one swivel caster for balance. The proposed system was prepared using fuzzyTECH software, which generated Python code that was implemented on the Jetbot AI vision robot [8].
Fig. 4. Active camera system and JetBot AI vision robot
Fig. 5 illustrates the sensing coverage of both the vision and ultrasonic sensors utilized in this experiment. The ultrasonic sensor effectively detects obstacles within a range of 7 meters, while the vision system provides accurate obstacle detection within a specified range of 130 cm to 870 cm.
Fig. 6(a) shows the image used in the experiment; Fig. 6(b) shows the values resulting from matching after image processing. Fig. 6 shows that the maximum matching error is within 4%. Therefore, the above vision system is suitable for application to navigation.
Fig. 6. Experimental result of the vision system in area A
The mobile robot navigates along a corridor 2 m wide containing some obstacles, as shown in Fig. 7(a). The actual trace of the mobile robot is shown in Fig. 7(b). It demonstrates that the mobile robot avoids the obstacles intelligently and follows the corridor to the goal.
Fig. 7. Navigation of robot in corridor environment
5. Conclusions
A fuzzy control algorithm for both obstacle avoidance and path planning has been implemented and tested in experiments, enabling the mobile robot to reach the goal point safely and autonomously in unknown environments.
We also presented an architecture for intelligent navigation of a mobile robot that determines the robot's behavior by arbitrating distributed control commands: seek goal, avoid obstacles, and maintain heading. Commands are arbitrated by assigning each a weight and combining them, with the weight values given by fuzzy inference. Arbitrating commands allows multiple goals and constraints to be considered simultaneously. To show the efficiency of the proposed method, real experiments were performed. The experimental results show that the mobile robot can navigate to the goal point safely in unknown environments and can also avoid moving obstacles autonomously.
Our ongoing research includes the validation of more complex sets of behaviors, both in simulation and on an actual mobile robot. Further research on obstacle-prediction algorithms and on the robustness of performance is required.
References
[1] T.S. Jin, J.M. Lee, and H. Hashimoto, "Position estimation of mobile robot using images of moving target in AI Space with distributed sensors," Advanced Robotics, vol. 20, no. 6, pp. 737-762, (2006). https://doi.org/10.1163/156855306777361604
[2] Y. Mizoguchi, D. Hamada, et al., "Image-based navigation of Small-size Autonomous Underwater Vehicle 'Kyubic'," in International Underwater Robot Competition, ALife Robotics Corporation Ltd., pp. 473-476, (2023).
[3] M.D. Pavel, S. Roșioru, N. Arghira, and G. Stamatescu, "Control of Open Mobile Robotic Platform Using Deep Reinforcement Learning," in Service Oriented, Holonic and Multi-Agent Manufacturing Systems for Industry of the Future: Proceedings of SOHOMA 2022, Springer, vol. 1083, pp. 368-379, (2023).
[4] T. Cerquitelli, F. Ventura, D. Apiletti, E. Baralis, E. Macii, and M. Poncino, "Enhancing manufacturing intelligence through an unsupervised data-driven methodology for cyclic industrial processes," Expert Systems with Applications, (2021).
[5] C. Cruz Ulloa, M. Garcia, J. del Cerro, and A. Barrientos, "Deep Learning for Victims Detection from Virtual and Real Search and Rescue Environments," in ROBOT2022: Fifth Iberian Robotics Conference: Advances in Robotics, Springer, Berlin/Heidelberg, vol. 590, pp. 3-13, (2023).
[6] A. Cordeiro, L. Rocha, C. Costa, and M. Silva, "Object Segmentation for Bin Picking Using Deep Learning," in ROBOT2022: Fifth Iberian Robotics Conference: Advances in Robotics, Lecture Notes in Networks and Systems, Springer, Berlin/Heidelberg, vol. 590, pp. 53-66, (2023).
[7] T.-S. Jin, "LQ control by linear model of Inverted Pendulum Robot for Robust Human Tracking," Journal of the Korean Society of Industry Convergence, vol. 23, no. 1, pp. 49-55, (2020).
[8] B. Crawford, R. Sourki, H. Khayyam, and A.S. Milani, "A machine learning framework with dataset-knowledgeability pre-assessment and a local decision-boundary crispness score: An Industry 4.0-based case study on composite autoclave manufacturing," Computers in Industry, (2021).
[9] X. Yin, J. Du, D. Geng, and C. Jin, "Development of an automatically guided rice transplanter using RTK-GNSS and IMU," IFAC-PapersOnLine, vol. 51, pp. 374-378, (2018). https://doi.org/10.1016/j.ifacol.2018.08.193
[10] C. Wu, J. Wu, L. Wen, Z. Chen, W. Yang, and W. Zhai, "Variable curvature path tracking control for the automatic navigation of tractors," Transactions of the CSAE, vol. 38, pp. 1-7, (2022).