Author - Muhammad Bilal

16 Apr

Introduction to Robot Learning from Demonstration

Robots are no longer limited to factory floors. They are increasingly used in healthcare, homes, and everyday environments to assist people with a wide range of tasks. However, programming robots is still difficult. Most robotic systems require expert knowledge and complex coding, which makes them hard to use for non-experts. This creates a major barrier for wider adoption.

To overcome this challenge, researchers developed a method called Learning from Demonstration (LfD). Instead of writing code, users teach robots new skills by showing them how to perform a task. This makes robot programming more intuitive and helps make robotics accessible to a broader range of users.

Limitations of Traditional Approaches

Traditional robot programming requires users to carefully define every action step using code. This process is time-consuming and requires specialized expertise.

Motion planning methods reduce the need to specify exact trajectories, but they still require precise instructions such as goal positions and waypoints. These programs are often rigid and need to be rewritten when the environment changes.

Reinforcement learning offers more flexibility, but it introduces its own challenges. Designing a suitable reward function usually requires deep domain knowledge, and training often takes a long time. This makes reinforcement learning difficult to apply in real-world settings.

Because of these limitations, Learning from Demonstration becomes especially attractive—particularly when a task is hard to describe using rules or rewards, or when manual programming is impractical.

Ways Humans Can Demonstrate Tasks

There are several ways users can demonstrate tasks to a robot, depending on how they interact with the system.

Modalities of Demonstrations (Image Source: Robotics 2024)
Kinesthetic Teaching

Kinesthetic teaching allows users to physically guide the robot by moving its joints directly. This method does not require extra sensors or equipment, making it simple and intuitive.

It is particularly well suited for robotic manipulators such as the KUKA iiwa (7-DoF) and Franka Emika Panda (7-DoF) robots, where users can easily demonstrate precise motions by hand.

Teleoperation

In teleoperation, users control the robot using a joystick or remote controller. Typically, the user controls only the robot’s end-effector, while the robot computes the joint movements using inverse kinematics.

This approach is useful for robots with many joints, such as humanoid or highly redundant robots, where controlling each joint directly would be overwhelming. However, computing inverse kinematics for high-degree-of-freedom robots is challenging: a redundant arm admits infinitely many joint configurations for the same end-effector pose, so the controller must select one according to some criterion.
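The post does not specify how the inverse kinematics is computed, but a standard numerical approach is damped least-squares (DLS), which stays well behaved near singular configurations. Below is a minimal sketch for a hypothetical 3R planar arm; the link lengths, damping factor, and iteration count are illustrative assumptions, not values from the post.

```python
import numpy as np

# Damped least-squares (DLS) inverse kinematics for a hypothetical 3R planar
# arm. Link lengths and the damping factor are illustrative assumptions.
L = np.array([1.0, 0.8, 0.6])

def fk(q):
    """End-effector (x, y) position from joint angles q."""
    c = np.cumsum(q)  # absolute link angles
    return np.array([np.sum(L * np.cos(c)), np.sum(L * np.sin(c))])

def jacobian(q):
    """2x3 positional Jacobian of the planar arm."""
    c = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(L[i:] * np.sin(c[i:]))
        J[1, i] = np.sum(L[i:] * np.cos(c[i:]))
    return J

def dls_step(q, x_target, lam=0.05):
    """One DLS update: dq = J^T (J J^T + lam^2 I)^-1 (x_target - fk(q))."""
    e = x_target - fk(q)
    J = jacobian(q)
    return q + J.T @ np.linalg.solve(J @ J.T + lam**2 * np.eye(2), e)

q = np.array([0.3, 0.4, 0.2])
target = np.array([1.2, 1.0])
for _ in range(200):
    q = dls_step(q, target)
# fk(q) is now very close to target; the damping term keeps the
# step size bounded even near singular configurations.
```

The damping trades a small amount of tracking accuracy for robustness: as the Jacobian loses rank, the undamped pseudoinverse would demand unbounded joint velocities, while the DLS solution stays finite.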

Passive Observation

In passive observation, users perform a task naturally without directly interacting with the robot. Human motion is captured using tools such as 3D cameras or motion-capture systems.

This method is easy for humans, but difficult for robots. Extracting meaningful task information from raw human motion data and transferring it to a robot is a complex problem.

Key Goals of Learning from Demonstration

A central goal of LfD is to enable robots to learn tasks quickly and reliably from only a small number of demonstrations. Achieving this requires improvement in two main areas:

  1. Learning algorithms – how effectively the robot can learn from the data
  2. Demonstration quality – how informative and well-structured the human demonstrations are

Improving both aspects is essential for building robotic systems that are easy to teach, adaptable, and practical for real-world use.


8 Apr

Singularity and Dynamic Obstacle Avoidance for a 3R Planar Robot

This project focuses on redundancy resolution for a 3R planar robotic manipulator. In simple terms, redundancy resolution means choosing the best joint configuration when multiple solutions can achieve the same end-effector motion. The choice depends on the task objective.

In this work, two key objectives are considered:

  1. Singularity avoidance
  2. Dynamic obstacle avoidance
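A common way to express redundancy resolution for such an arm is the pseudoinverse-plus-null-space formulation: the primary task velocity is tracked exactly, while a secondary motion is projected into the Jacobian's null space so it cannot disturb the end-effector. The post does not state which scheme it uses, so treat this as an illustrative sketch with assumed link lengths and an assumed secondary joint motion.

```python
import numpy as np

# Redundancy resolution for a 3R planar arm: the 2x3 positional Jacobian
# leaves one degree of redundancy for a secondary objective.
# Link lengths and the secondary motion q0_dot are assumptions.
L = np.array([1.0, 0.8, 0.6])

def jacobian(q):
    """2x3 positional Jacobian of the planar arm."""
    c = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(L[i:] * np.sin(c[i:]))
        J[1, i] = np.sum(L[i:] * np.cos(c[i:]))
    return J

def resolve(q, x_dot, q0_dot):
    """q_dot = J+ x_dot + (I - J+ J) q0_dot: track x_dot exactly,
    and use the remaining freedom for the secondary motion q0_dot."""
    J = jacobian(q)
    J_pinv = np.linalg.pinv(J)
    N = np.eye(3) - J_pinv @ J  # null-space projector
    return J_pinv @ x_dot + N @ q0_dot

q = np.array([0.5, -0.8, 0.4])
x_dot = np.array([0.1, 0.0])         # desired end-effector velocity
q0_dot = np.array([0.0, 0.2, -0.2])  # secondary joint motion (assumed)
q_dot = resolve(q, x_dot, q0_dot)
# J(q) @ q_dot equals x_dot up to numerical precision: the projected
# secondary motion does not disturb the primary task.
```

In practice q0_dot is chosen as the gradient of an objective such as manipulability (for singularity avoidance) or distance to an obstacle, which is exactly how the two objectives of this project can share one formulation.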

Singularity Avoidance

A singularity occurs when the robot loses freedom of motion in certain directions, making control unstable or inefficient. To address this, a singularity avoidance algorithm is implemented and tested in simulation.

The robot is commanded to follow a desired trajectory while also keeping its joint configuration away from singular positions. To evaluate the effectiveness of the approach, two simulations are compared:

  1. The robot follows the desired trajectory without singularity avoidance.
  2. The robot follows the same trajectory with singularity avoidance enabled.

The results are visualised using a manipulability ellipsoid. A larger ellipsoid indicates higher manipulability, meaning the robot is far from a singularity; as the robot approaches a singular configuration, the ellipsoid shrinks. At a fully extended position the singularity lies on the workspace boundary and cannot be avoided, so the ellipsoid collapses into a straight line.
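The shrinking ellipsoid can be quantified with the standard manipulability measure w = sqrt(det(J Jᵀ)), which is zero exactly at a singularity; the post does not name the measure it plots, so this is one common choice. Link lengths and test configurations are assumptions.

```python
import numpy as np

# Manipulability of a 3R planar arm (illustrative link lengths).
L = np.array([1.0, 0.8, 0.6])

def jacobian(q):
    """2x3 positional Jacobian of the planar arm."""
    c = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(L[i:] * np.sin(c[i:]))
        J[1, i] = np.sum(L[i:] * np.cos(c[i:]))
    return J

def manipulability(q):
    """Yoshikawa measure w = sqrt(det(J J^T)); zero at a singularity."""
    J = jacobian(q)
    return np.sqrt(np.linalg.det(J @ J.T))

w_bent = manipulability(np.array([0.4, 0.9, -0.5]))       # well-conditioned pose
w_straight = manipulability(np.array([0.0, 1e-3, 1e-3]))  # near fully extended
# w_straight is close to zero: the ellipse collapses toward a line.
```

The singular values of J are the semi-axis lengths of the velocity ellipsoid, so w is proportional to the ellipsoid's area; maximising w as a secondary objective is one way to implement the singularity avoidance described above.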

Dynamic Obstacle Avoidance

In addition to singularity handling, a dynamic obstacle avoidance algorithm is implemented. The robot is required to avoid a moving obstacle while continuing to follow the desired trajectory.

The simulation demonstrates that the robot can successfully adjust its motion in real time to avoid collisions, without deviating significantly from the task path.

The robot follows the desired trajectory without obstacle avoidance.
The robot follows the desired trajectory with obstacle avoidance.
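One common way to realise this behaviour is an artificial potential field that adds a repulsive velocity whenever the obstacle enters an influence region. The post does not specify its algorithm, so the sketch below, including the gains, influence radius, and geometry, is purely illustrative.

```python
import numpy as np

# Repulsive velocity in the style of an artificial potential field.
# Gains (eta), influence radius (rho0), and positions are assumptions.
def repulsive_velocity(p, p_obs, rho0=0.5, eta=0.3):
    """Push point p away from obstacle p_obs when it is within radius rho0."""
    d = p - p_obs
    rho = np.linalg.norm(d)
    if rho >= rho0 or rho == 0.0:
        return np.zeros(2)
    # Magnitude grows as the obstacle approaches; zero at the boundary rho0.
    return eta * (1.0 / rho - 1.0 / rho0) / rho**2 * (d / rho)

p = np.array([0.0, 0.0])      # point on the robot
p_obs = np.array([0.3, 0.0])  # moving obstacle, currently to the right
v_avoid = repulsive_velocity(p, p_obs)
# v_avoid points away from the obstacle (negative x direction here);
# adding it to the task velocity bends the motion around the obstacle.
```

Because the repulsive term vanishes outside the influence radius, the robot returns to pure trajectory tracking as soon as the moving obstacle is clear, which matches the behaviour described in the simulation.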


28 Mar

Position-Based Impedance (Admittance) Control for Force Tracking on Even and Uneven Surfaces

Industrial robots are traditionally designed for tasks that do not require physical contact with their environment. Because of this, most industrial robots rely on position control rather than force control. Torque-based impedance control is often difficult to apply in these systems because it requires an accurate dynamic model of the robot, which can be complex and hard to compute.

To address this issue, this project uses position-based impedance control. This approach allows the robot to behave like a virtual mass–spring–damper system, enabling force tracking during contact tasks without explicitly deriving the robot’s dynamic model.

As robots are increasingly used in tasks that involve direct physical interaction, safe and accurate force control has become essential.
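The virtual mass–spring–damper behaviour can be sketched as a one-dimensional admittance update: the measured force error drives the virtual dynamics, and the resulting displacement is sent to the robot's inner position loop. All parameter values below are assumptions for illustration.

```python
# One-dimensional admittance law: M*e_ddot + D*e_dot + K*e = f_err.
# Virtual parameters and the control period are illustrative assumptions.
M, D, K = 1.0, 50.0, 500.0  # virtual mass, damping, stiffness
dt = 0.001                   # control period [s]

def admittance_step(e, e_dot, f_err):
    """Integrate the virtual dynamics one step (semi-implicit Euler).
    e is the position correction passed to the inner position controller."""
    e_ddot = (f_err - D * e_dot - K * e) / M
    e_dot = e_dot + e_ddot * dt
    e = e + e_dot * dt
    return e, e_dot

# A constant force error settles at the static deflection f_err / K.
e, e_dot, f_err = 0.0, 0.0, 5.0
for _ in range(20000):
    e, e_dot = admittance_step(e, e_dot, f_err)
# e approaches 5.0 / 500.0 = 0.01 m
```

Because only the force error enters the virtual dynamics, no model of the robot itself is needed; the inner position controller absorbs the (unmodelled) robot dynamics, which is exactly why this scheme suits position-controlled industrial arms.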

Force Tracking on an Even Surface

Position-based impedance control is implemented on a 3R planar robot to track contact force on a flat surface. The simulation starts with the robot operating in free space and then transitions into contact with the surface, as shown in the simulation video.

Force Tracking on an Uneven Surface

For uneven surfaces, the same control strategy is applied, but the environment stiffness now changes over time to model variations in surface properties, testing the controller’s ability to adapt. The stiffness profile is defined as:

  • Ke = 3500 N/m for 0 < t < 2.5 s
  • Ke = 5000 N/m for 2.5 s < t < 3 s

The simulation demonstrates stable force tracking under changing surface conditions.
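Given the stiffness profile above, the contact can be modelled as a time-varying spring. The sketch below assumes a penetration-only contact at a surface location x_e; that geometry is an illustration, not a detail from the post.

```python
# Environment model with the stiffness schedule from the profile above.
# Surface location and penetration-only contact are assumptions.
def Ke(t):
    """Surface stiffness [N/m] over the simulated interval 0 < t < 3 s."""
    return 3500.0 if t < 2.5 else 5000.0

def contact_force(x, x_e, t):
    """Spring-like contact: force only while penetrating the surface."""
    pen = x - x_e
    return Ke(t) * pen if pen > 0.0 else 0.0

# Tracking the same force reference needs less penetration on the
# stiffer segment (f = Ke * pen, so pen = f / Ke):
pen_soft = 10.0 / Ke(2.0)  # penetration for 10 N on the softer segment
pen_hard = 10.0 / Ke(2.8)  # penetration for 10 N on the stiffer segment
```

This is why the stiffness step at t = 2.5 s is a meaningful test: the controller must shrink the commanded penetration almost instantly to keep the contact force constant.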

Force tracking on even surface with constant stiffness
Force tracking on uneven surface with varying stiffness