Robotic Arm

What Does a Robotic Arm Look Like?

  • Gears
  • End Effectors
  • Anything Metal

Not really: since the robotic arm was invented by humans, we get to decide its function and look.

Biomimicry:

Before starting this project, I had begun thinking about biomimicry: how nature's mechanisms can inspire robotic systems. In biological organisms, movement is rarely rigid or purely mechanical. Muscles, tendons, and joints interact fluidly to adapt to the environment with precision and efficiency.

Spirals

Biomimetic principles could influence several aspects of my robotic arm project:

Adaptive Gripping: Studying how human hands or tentacles conform to objects could inspire soft grippers for delicate handling.

Energy Efficiency: Many biological systems use minimal energy through passive dynamics and optimized motion paths—concepts that could inform smoother, more efficient trajectories for robotic joints.

! Remember: nature has had over 3 billion years to evolve. The history of the robotic arm is less than 80 years!

Perception and Coordination: Just as animals integrate sensory feedback for balance and control, future versions of my project could use vision and tactile data for real-time correction and adaptability.

Getting Started

I began my exploration with the UFactory xArm using its premade GUI software. This interface allowed me to understand the basics of motion control—setting joint angles, defining waypoints, and executing movements through a graphical interface. It was a straightforward way to visualize how different axes interact and to see how the robot interprets spatial positioning.

The GUI also shows you how to type in G-code, which is widely used in CNC machining and 3D printing. For example:

G-code to draw a 100mm x 100mm x 100mm cube

G21 ; Set units to millimeters
G90 ; Absolute positioning
G28 ; Home all axes

; Move to starting position (bottom corner of cube)
G0 Z10 ; Lift Z for safety
G0 X0 Y0 ; Move to origin
G0 Z0 ; Lower to starting position

; Draw bottom square (Z=0)
G1 X100 Y0 F1000 ; Edge 1: (0,0,0) to (100,0,0)
G1 X100 Y100 F1000 ; Edge 2: (100,0,0) to (100,100,0)
G1 X0 Y100 F1000 ; Edge 3: (100,100,0) to (0,100,0)
G1 X0 Y0 F1000 ; Edge 4: (0,100,0) to (0,0,0)

; Draw vertical edges (Z=0 to Z=100), traversing the top face between corners
G1 X0 Y0 Z100 F1000 ; Vertical edge at (0,0): (0,0,0) to (0,0,100)
G1 X100 Y0 Z100 F1000 ; Traverse top face to (100,0,100)
G1 X100 Y0 Z0 F1000 ; Vertical edge at (100,0), traced downward
G1 X100 Y0 Z100 F1000 ; Back up to the top face
G1 X100 Y100 Z100 F1000 ; Traverse top face to (100,100,100)
G1 X100 Y100 Z0 F1000 ; Vertical edge at (100,100), traced downward
G1 X100 Y100 Z100 F1000 ; Back up to the top face
G1 X0 Y100 Z100 F1000 ; Traverse top face to (0,100,100)
G1 X0 Y100 Z0 F1000 ; Vertical edge at (0,100), traced downward
G1 X0 Y100 Z100 F1000 ; Back up to the top face

; Draw top square (Z=100)
G1 X0 Y0 Z100 F1000 ; Move to top corner
G1 X100 Y0 Z100 F1000 ; Edge 9: (0,0,100) to (100,0,100)
G1 X100 Y100 Z100 F1000 ; Edge 10: (100,0,100) to (100,100,100)
G1 X0 Y100 Z100 F1000 ; Edge 11: (100,100,100) to (0,100,100)
G1 X0 Y0 Z100 F1000 ; Edge 12: (0,100,100) to (0,0,100)

; Return to safe position
G0 Z120 ; Lift Z
G0 X0 Y0 ; Return to origin

Robotic Arm Setup

Moving to Code: Python SDK

Once I was comfortable with the GUI, I transitioned to using the Python SDK to achieve more precise and programmable control. My goal was to make the arm stack Jenga blocks autonomously.

https://github.com/xArm-Developer/xArm-Python-SDK

I learned everything from their instructions, though the Google-translated documentation isn't very friendly to newcomers.

I wrote a Python script to move the arm to predetermined X, Y, Z coordinates and roll, pitch, yaw orientations relative to the table. Using the move_gohome() and set_position() methods from the SDK, I was able to build a repeatable and consistent stacking routine.

from xarm.wrapper import XArmAPI

arm = XArmAPI('192.168.1.xxx')  # the arm's IP address (placeholder)

arm.motion_enable(enable=True)  # power on the servos
arm.set_mode(0)                 # position control mode
arm.set_state(state=0)          # ready to move

arm.move_gohome(wait=True)      # start from the home pose

# Absolute Cartesian pose: x/y/z in mm, roll/pitch/yaw in degrees
arm.set_position(x=200, y=200, z=0, roll=-200, pitch=0, yaw=0)
print(arm.get_position())

This stage helped me understand how Cartesian coordinates map to real-world movement and how to fine-tune motion parameters for accuracy and stability.
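The snippet above is only a single commanded move; the full stacking routine isn't reproduced here. As a rough sketch, one pick-and-place cycle per block could look something like the following, where the pick/place coordinates, block height, and gripper values are all placeholders, and the SDK's gripper calls (set_gripper_enable, set_gripper_position) are used to open and close the claw:

# Rough sketch of a stacking loop (all coordinates and gripper values are placeholders)
BLOCK_HEIGHT = 15           # approximate Jenga block thickness in mm (assumption)
PICK  = dict(x=300, y=-100, roll=180, pitch=0, yaw=0)   # hypothetical pick-up spot
PLACE = dict(x=200, y=200,  roll=180, pitch=0, yaw=0)   # hypothetical stack location

arm.set_gripper_enable(True)

for layer in range(5):
    z_top = layer * BLOCK_HEIGHT
    # Pick: hover above the block, descend, close the gripper, lift
    arm.set_position(z=50, **PICK, wait=True)
    arm.set_position(z=0, **PICK, wait=True)
    arm.set_gripper_position(300, wait=True)    # close (value depends on the gripper)
    arm.set_position(z=80, **PICK, wait=True)
    # Place: hover above the stack, descend to the current layer, open, lift
    arm.set_position(z=z_top + 50, **PLACE, wait=True)
    arm.set_position(z=z_top, **PLACE, wait=True)
    arm.set_gripper_position(850, wait=True)    # open
    arm.set_position(z=z_top + 50, **PLACE, wait=True)

In practice each new block also has to end up at the same pick-up spot, or the pick coordinates would need to be updated per block.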

Demonstration

Here's a funny video of me stacking blocks:

Next Steps: Vision and AprilTags

Stack

For the next phase of this project, I plan to integrate computer vision to enable the arm to perceive its environment. Specifically, I want the system to identify AprilTags on the workspace and calculate their positions to determine where to place or pick up objects.

The concept of AprilTags was inspired by this year's FTC game.


Here’s a minimal example of my initial setup for tag detection using pupil_apriltags and OpenCV:

from pupil_apriltags import Detector
import cv2

# Load and preprocess image
img = cv2.imread("six.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Initialize detector
at_detector = Detector(
    families="tag16h5",
    nthreads=1,
    quad_decimate=1.0,
    quad_sigma=0.0,
    refine_edges=1,
    decode_sharpening=0.25,
    debug=0,
)

# Detect and print results
print(at_detector.detect(gray))

Once the tag positions are identified, I will use geometric transformations and calibration math to map the detected coordinates into the robot’s coordinate system. This will allow dynamic, vision-based motion instead of relying solely on predefined positions.
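As a rough sketch of that mapping, assuming I already have the camera intrinsics (fx, fy, cx, cy), the physical tag size, and a camera-to-robot-base transform from a separate hand-eye calibration step (all placeholder values below), pupil_apriltags can estimate each tag's pose directly when given those camera parameters:

import numpy as np
import cv2
from pupil_apriltags import Detector

CAMERA_PARAMS = (600.0, 600.0, 320.0, 240.0)  # (fx, fy, cx, cy) placeholders from calibration
TAG_SIZE = 0.05                               # tag edge length in meters (assumption)
T_BASE_CAM = np.eye(4)                        # camera -> robot base transform (placeholder)

detector = Detector(families="tag16h5")
gray = cv2.cvtColor(cv2.imread("six.png"), cv2.COLOR_BGR2GRAY)

detections = detector.detect(gray, estimate_tag_pose=True,
                             camera_params=CAMERA_PARAMS, tag_size=TAG_SIZE)

for det in detections:
    p_cam = np.append(det.pose_t.flatten(), 1.0)  # tag position in the camera frame (homogeneous)
    p_base = T_BASE_CAM @ p_cam                   # map into the robot's base frame
    print(f"tag {det.tag_id}: base-frame position {p_base[:3]} m")

The resulting base-frame coordinates could then be fed into set_position() calls like the ones in the stacking script.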

Pictures:

Top
Model
Tower