Muhammad Bilal
Robotics & AI Researcher
About
I am a graduate student at the University of Melbourne, Australia, with a research focus on robotics and human-centered intelligent systems. My work explores how robots can learn effectively from human input and interact with novices in intuitive and meaningful ways.
Previously, I served as a Research Assistant and later as Team Lead at the Human-Centered Robotics Lab, National Center of Robotics and Automation (NCRA), Pakistan.
Education
University of Engineering & Technology, Lahore, Pakistan.
News
Feb 02, 2026 — Successfully completed PhD Final Seminar.
Dec 18, 2025 — Delivered an invited talk at Monash University (thanks to Dr. Michael for the invitation).
Dec 02, 2025 — Our paper has been accepted for publication at HRI 2026.
Nov 21, 2025 — Presented a robot demonstration at the 70th CIS School Anniversary.
Apr 20, 2025 — Successfully completed the second-year PhD progress review with a satisfactory outcome.
Mar 13, 2025 — Attended the HRI 2025 Conference in Melbourne.
Jun 30, 2024 — Our paper has been accepted for publication at IROS 2024.
Apr 20, 2024 — Passed PhD Confirmation.
Selected Publications
Conference Proceedings:
- Muhammad Bilal, Tharaka Ratnayake, D. Antony Chacon, Nir Lipovetzky, Denny Oetomo, Wafa Johal, “Design and Evaluation of AR-Based Real-Time Feedback System for Kinesthetic Robot Teaching,” In ACM Designing Interactive Systems Conference (DIS), 2026.
[In Review] - Muhammad Bilal, D. Antony Chacon, Nir Lipovetzky, Denny Oetomo, Wafa Johal, “Investigating the Impact of Robot Degree of Redundancy on Learning from Demonstration,” In 2026 21st ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2026, doi: 10.1145/3757279.3785606.
[In Press] - Muhammad Bilal, Nir Lipovetzky, Denny Oetomo and Wafa Johal, “Beyond Success: Quantifying Demonstration Quality in Learning from Demonstration,” In 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2024, pp. 5120-5127, doi: 10.1109/IROS58592.2024.10802187.
- Syed Ali Huzaifa, Abdurrehman Akhtar, Meeran Ali Khan, and Muhammad Bilal, “Detection of Parkinson’s Tremor in Real Time Using Accelerometers,” In IEEE 7th International Conference on Smart Instrumentation, Measurement and Applications (ICSIMA), 2021, pp. 5-9, doi: 10.1109/ICSIMA50015.2021.9526327.
- Muhammad Bilal, Ali Raza, Mohsin Rizwan et al. “Towards Rehabilitation of Mughal Era Historical Places using 7 DOF Robotic Manipulator,” In 2019 International Conference on Robotics and Automation in Industry (ICRAI), 2019, pp. 1-6, doi: 10.1109/ICRAI47710.2019.8967360.
Journal Articles:
- Muhammad Bilal, Mohsin Rizwan et al., “Design Optimization of Powered Ankle Prosthesis to Reduce Peak Power Requirement,” Science Progress, Vol. 105(3), pp. 1 – 16, 2022, doi: 10.1177/00368504221117895.
- Muhammad Bilal, M. Nadeem Akram, and Mohsin Rizwan, “Adaptive Variable Impedance Control for Multi-axis Force Tracking in Uncertain Environment Stiffness with Redundancy Exploitation,” Journal of Control Engineering and Applied Informatics, Vol. 24(2), pp. 35 – 45, 2022.
Research Projects
Singularity and Dynamic Obstacle Avoidance for a 3R Planar Robot
This project focuses on redundancy resolution for a 3R planar robotic manipulator. In simple terms, redundancy resolution means choosing the best joint configuration when multiple solutions can achieve the same end-effector motion. The choice depends on the task objective.
In this work, two key objectives are considered:
- Singularity avoidance
- Dynamic obstacle avoidance
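One common way to realize this choice, sketched below assuming a velocity-level (resolved-rate) controller with unit link lengths, is pseudoinverse control with null-space projection: the primary task fixes the end-effector velocity, and a secondary objective (here the hypothetical term `qdot_null`) is projected into the Jacobian's null space so it never disturbs the task.

```python
import numpy as np

def jacobian_3r(q, l=(1.0, 1.0, 1.0)):
    """Planar 3R Jacobian mapping joint rates to end-effector (x, y) velocity."""
    l1, l2, l3 = l
    s1, s12, s123 = np.sin(q[0]), np.sin(q[0] + q[1]), np.sin(q[0] + q[1] + q[2])
    c1, c12, c123 = np.cos(q[0]), np.cos(q[0] + q[1]), np.cos(q[0] + q[1] + q[2])
    return np.array([
        [-l1*s1 - l2*s12 - l3*s123, -l2*s12 - l3*s123, -l3*s123],
        [ l1*c1 + l2*c12 + l3*c123,  l2*c12 + l3*c123,  l3*c123],
    ])

def resolve_redundancy(q, xdot_des, qdot_null):
    """Track xdot_des with the primary task; spend the leftover degree of
    freedom on a secondary objective encoded in qdot_null."""
    J = jacobian_3r(q)
    J_pinv = np.linalg.pinv(J)
    N = np.eye(3) - J_pinv @ J          # null-space projector: J @ N = 0
    return J_pinv @ xdot_des + N @ qdot_null
```

Because the secondary term passes through the projector N, the commanded end-effector velocity is reproduced exactly whatever qdot_null is, which is what makes this structure convenient for layering singularity or obstacle avoidance on top of trajectory tracking.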
Singularity Avoidance
A singularity occurs when the robot loses freedom of motion in certain directions, making control unstable or inefficient. To address this, a singularity avoidance algorithm is implemented and tested in simulation.
The robot is commanded to follow a desired trajectory while keeping its joint configuration away from singular positions. To evaluate the effectiveness of the approach, two simulations are compared: one with singularity avoidance enabled and one without.
The results are visualised using a manipulability ellipsoid. As the robot approaches a singular configuration, the ellipsoid area shrinks. A larger ellipsoid indicates higher manipulability, meaning the robot is far from a singularity. At a fully extended position, a hard singularity cannot be avoided, and the ellipsoid collapses into a straight line.
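The quantity behind these ellipsoids is Yoshikawa's manipulability measure, w = sqrt(det(J Jᵀ)), which is proportional to the ellipsoid area for this planar task. A minimal sketch, assuming unit link lengths:

```python
import numpy as np

def jacobian_3r(q, l=(1.0, 1.0, 1.0)):
    """Planar 3R Jacobian (end-effector linear velocity only)."""
    a = np.cumsum(q)                      # absolute link angles
    sx = -np.array(l) * np.sin(a)         # per-link x-velocity contributions
    cy =  np.array(l) * np.cos(a)         # per-link y-velocity contributions
    # column j collects the contributions of links j..3
    Jx = [sx[j:].sum() for j in range(3)]
    Jy = [cy[j:].sum() for j in range(3)]
    return np.array([Jx, Jy])

def manipulability(J):
    """Yoshikawa measure sqrt(det(J J^T)); zero exactly at a singularity."""
    det = max(np.linalg.det(J @ J.T), 0.0)   # guard tiny negative round-off
    return float(np.sqrt(det))
```

Evaluating this along the trajectory reproduces the behaviour described above: at the fully extended configuration (all joint angles aligned) the measure drops to zero and the ellipsoid collapses to a line, while bent configurations give a strictly positive value.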
Dynamic Obstacle Avoidance
In addition to singularity handling, a dynamic obstacle avoidance algorithm is implemented. The robot is required to avoid a moving obstacle while continuing to follow the desired trajectory.
The simulation demonstrates that the robot can successfully adjust its motion in real time to avoid collisions, without deviating significantly from the task path.
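A common way to generate such avoidance motion, shown here as an illustrative sketch rather than the exact algorithm used in this project, is a potential-field repulsive velocity applied to control points on the arm; the influence radius `d0` and `gain` are tuning assumptions.

```python
import numpy as np

def repulsive_velocity(p_point, p_obs, d0=0.4, gain=1.0):
    """Push a control point on the arm away from a (possibly moving) obstacle
    once it enters the influence radius d0; zero outside that radius."""
    diff = p_point - p_obs
    d = np.linalg.norm(diff)
    if d >= d0 or d == 0.0:
        return np.zeros_like(diff)
    # Magnitude grows sharply as the distance d approaches zero
    mag = gain * (1.0 / d - 1.0 / d0) / d**2
    return mag * diff / d
```

Feeding this velocity into the null-space term of the redundancy-resolution scheme lets the arm dodge the obstacle while the end-effector keeps tracking the task path, consistent with the real-time behaviour reported above.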
Position-Based Impedance (Admittance) Control for Force Tracking on Even and Uneven Surfaces
As robots are increasingly used in tasks that involve direct physical interaction, safe and accurate force control has become essential. Industrial robots, however, are traditionally designed for tasks that do not require physical contact with their environment, so most rely on position control rather than force control. Torque-based impedance control is often difficult to apply to these systems because it requires an accurate dynamic model of the robot, which can be complex and hard to obtain. To address this, this project uses position-based impedance (admittance) control. This approach allows the robot to behave like a virtual mass–spring–damper system, enabling force tracking during contact tasks without explicitly deriving the robot’s dynamic model.
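The virtual mass–spring–damper relation, M·ë + B·ė + K·e = F_ext, can be integrated numerically to turn a measured contact force into a position offset added to the nominal trajectory. A minimal one-dimensional Euler sketch; the gains M, B, K here are illustrative, not the project's values:

```python
def admittance_step(e, edot, f_ext, M=1.0, B=20.0, K=100.0, dt=0.001):
    """One Euler step of the virtual dynamics  M*edd + B*ed + K*e = f_ext.
    Returns the updated position offset e and its rate edot."""
    eddot = (f_ext - B * edot - K * e) / M
    edot = edot + eddot * dt
    e = e + edot * dt
    return e, edot
```

Under a constant external force the offset settles at f_ext / K, which is what gives the robot its compliant, spring-like response without any knowledge of its own dynamic model.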
Force Tracking on an Even Surface
Position-based impedance control is implemented on a 3R planar robot to track contact force on a flat surface. The simulation starts with the robot operating in free space and then transitions into contact with the surface, as shown in the simulation video.
Force Tracking on an Uneven Surface
For uneven surfaces, the same control strategy is applied, but the environment stiffness changes over time. This allows the controller to adapt to variations in surface properties. The stiffness profile is defined as:
- Ke = 3500 N/m for 0 s < t < 2.5 s
- Ke = 5000 N/m for 2.5 s < t < 3 s
The simulation demonstrates stable force tracking under changing surface conditions.
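The scenario above can be reproduced in a minimal one-dimensional simulation. Only the stiffness schedule comes from the text; the integral force-control law, gain `ki`, and desired force `f_des` below are illustrative assumptions:

```python
def simulate_force_tracking(f_des=10.0, dt=0.001, t_end=3.0):
    """Integral force control against a spring environment whose stiffness
    Ke switches from 3500 to 5000 N/m at t = 2.5 s (profile from the text)."""
    x, forces = 0.0, []
    ki = 0.05  # integral gain on the force error (tuning assumption)
    for k in range(int(t_end / dt)):
        t = k * dt
        ke = 3500.0 if t < 2.5 else 5000.0
        f = ke * max(x, 0.0)            # spring-environment contact force
        x += ki * (f_des - f) * dt      # push in until desired force reached
        forces.append(f)
    return forces
```

The simulated force converges to the setpoint before the stiffness switch, shows a brief transient when Ke jumps at t = 2.5 s, and re-converges afterwards, matching the stable tracking behaviour reported above.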
Articles
Introduction to Robot Learning from Demonstration
Robots are no longer limited to factory floors. They are increasingly used in healthcare, homes, and everyday environments to assist people with a wide range of tasks. However, programming robots is still difficult. Most robotic systems require expert knowledge and complex coding, which makes them hard to use for non-experts. This creates a major barrier for wider adoption.
To overcome this challenge, researchers developed a method called Learning from Demonstration (LfD). Instead of writing code, users teach robots new skills by showing them how to perform a task. This makes robot programming more intuitive and helps make robotics accessible to a broader range of users.
Limitations of Traditional Approaches
Traditional robot programming requires users to carefully define every action step using code. This process is time-consuming and requires specialized expertise.
Motion planning methods reduce the need to specify exact trajectories, but they still require precise instructions such as goal positions and waypoints. These programs are often rigid and need to be rewritten when the environment changes.
Reinforcement learning offers more flexibility, but it introduces its own challenges. Designing a suitable reward function usually requires deep domain knowledge, and training often takes a long time. This makes reinforcement learning difficult to apply in real-world settings.
Because of these limitations, Learning from Demonstration becomes especially attractive—particularly when a task is hard to describe using rules or rewards, or when manual programming is impractical.
Ways Humans Can Demonstrate Tasks
There are several ways users can demonstrate tasks to a robot, depending on how they interact with the system.

Kinesthetic Teaching
Kinesthetic teaching allows users to physically guide the robot by moving its joints directly. This method does not require extra sensors or equipment, making it simple and intuitive.
It is particularly well suited for robotic manipulators such as the KUKA iiwa (7-DoF) and Franka Emika (7-DoF) robots, where users can easily demonstrate precise motions by hand.
Teleoperation
In teleoperation, users control the robot using a joystick or remote controller. Typically, the user controls only the robot’s end-effector, while the robot computes the joint movements using inverse kinematics.
This approach is useful for robots with many joints, such as humanoid or highly redundant robots, where controlling each joint directly would be overwhelming. However, computing inverse kinematics for high-degree-of-freedom robots is mathematically challenging.
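A standard remedy for that difficulty, shown here as a generic sketch rather than a method tied to any specific robot, is damped least-squares (Levenberg–Marquardt) inverse kinematics: it trades a small amount of tracking accuracy for joint velocities that stay bounded near singularities, where the plain pseudoinverse blows up. The `damping` value is a tuning assumption.

```python
import numpy as np

def dls_ik_step(J, x_err, damping=0.05):
    """Damped least-squares velocity IK:
        qdot = J^T (J J^T + lambda^2 I)^{-1} x_err
    J is the task Jacobian, x_err the end-effector error to correct."""
    JJt = J @ J.T
    lam2I = (damping ** 2) * np.eye(JJt.shape[0])
    return J.T @ np.linalg.solve(JJt + lam2I, x_err)
```

Away from singularities the damping term is negligible and the step behaves like the pseudoinverse; near them, lambda dominates and the commanded joint rates shrink gracefully instead of diverging.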
Passive Observation
In passive observation, users perform a task naturally without directly interacting with the robot. Human motion is captured using tools such as 3D cameras or motion-capture systems.
This method is easy for humans, but difficult for robots. Extracting meaningful task information from raw human motion data and transferring it to a robot is a complex problem.
Key Goals of Learning from Demonstration
A central goal of LfD is to enable robots to learn tasks quickly and reliably from only a small number of demonstrations. Achieving this requires improvement in two main areas:
- Learning algorithms – how effectively the robot can learn from the data
- Demonstration quality – how informative and well-structured the human demonstrations are
Improving both aspects is essential for building robotic systems that are easy to teach, adaptable, and practical for real-world use.

