Once I finished the face detection project, gesture recognition became much easier to understand, and you can do it too. It gave me a more solid foundation for studying deep learning; eventually I want to build my own object recognition model and have the drone fly with it. Before that, I would like to try body detection and swarm flying as my next projects. OK, let’s see how we make this happen with our drone.
Capture video frames from the drone
Use a hand detection tool to find hands in the frame and pick out the left hand.
Based on the detected hand information (including wrist, fingertips, knuckles & phalanges) and some simple logic, achieve gesture control.
Understanding your hand
First of all, let’s understand our hands… XD
I got this picture from sketchymedicine.com, it contains a lot of medical sketches and detailed information, very cool website.
Program with MediaPipe
See below for the full program
from djitellopy import Tello
import cv2
import mediapipe as mp
import threading
import math
import logging
import time

# Assign tello to the Tello class and set the logging to error only
tello = Tello()
tello.LOGGER.setLevel(logging.ERROR)  # Ignore INFO from Tello
fly = False  # For debugging purposes

# Assign the MediaPipe hands detection solution to mpHands and define the confidence levels
mpHands = mp.solutions.hands
hands = mpHands.Hands(min_detection_confidence=0.8, min_tracking_confidence=0.8)
# When we detect the hand, we can use mp.solutions to plot the locations and connections
mpDraw = mp.solutions.drawing_utils

def hand_detection(tello):
    global gesture
    while True:
        # Read the frame from Tello
        frame = tello.get_frame_read().frame
        frame = cv2.flip(frame, 1)
        # Call hands from the MediaPipe solution for the hand detection; the frame must be RGB
        result = hands.process(frame)
        # Read the frame width & height instead of using the fixed numbers 960 & 720
        frame_height, frame_width, _ = frame.shape
        my_hand = []
        if result.multi_hand_landmarks:
            for handlms, handside in zip(result.multi_hand_landmarks, result.multi_handedness):
                if handside.classification[0].label == 'Right':
                    # We will skip the right hand information
                    continue
                # With mp.solutions.drawing_utils, plot the landmark locations and connect them with the default style
                mpDraw.draw_landmarks(frame, handlms, mpHands.HAND_CONNECTIONS,
                                      mp.solutions.drawing_styles.get_default_hand_landmarks_style(),
                                      mp.solutions.drawing_styles.get_default_hand_connections_style())
                # Convert all the hand information from a ratio into an actual position according to the frame size
                for i, landmark in enumerate(handlms.landmark):
                    x = int(landmark.x * frame_width)
                    y = int(landmark.y * frame_height)
                    my_hand.append((x, y))
                # Landmark positions captured in my_hand:
                # wrist = 0
                # thumb = 1 - 4
                # index = 5 - 8
                # middle = 9 - 12
                # ring = 13 - 16
                # little = 17 - 20
                # Set up the left hand control with the pre-defined logic.
                # Besides the thumb, we compare the fingertip y position with the knuckle y position as an indicator.
                # The thumb uses the x position as the comparison.
                # Stop: a fist
                # Land: open hand
                # Right: only thumb open
                # Left: only little finger open
                # Up: only index finger open
                # Down: both thumb and index finger open
                # Come: both index and middle finger open
                # Away: index, middle and ring fingers open
                finger_on = []
                if my_hand[4][0] > my_hand[2][0]:
                    finger_on.append(1)
                else:
                    finger_on.append(0)
                for i in range(1, 5):
                    if my_hand[4 + i * 4][1] < my_hand[2 + i * 4][1]:
                        finger_on.append(1)
                    else:
                        finger_on.append(0)
                gesture = 'Unknown'
                if sum(finger_on) == 0:
                    gesture = 'Stop'
                elif sum(finger_on) == 5:
                    gesture = 'Land'
                elif sum(finger_on) == 1:
                    if finger_on[0] == 1:
                        gesture = 'Right'
                    elif finger_on[4] == 1:
                        gesture = 'Left'
                    elif finger_on[1] == 1:
                        gesture = 'Up'
                elif sum(finger_on) == 2:
                    if finger_on[0] == finger_on[1] == 1:
                        gesture = 'Down'
                    elif finger_on[1] == finger_on[2] == 1:
                        gesture = 'Come'
                elif sum(finger_on) == 3 and finger_on[1] == finger_on[2] == finger_on[3] == 1:
                    gesture = 'Away'
        cv2.putText(frame, gesture, (10, 50), cv2.FONT_HERSHEY_SIMPLEX, 2, (255, 0, 0), 3)
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        cv2.imshow('Tello Video Stream', frame)
        cv2.waitKey(1)
        if gesture == 'Landed':
            break

######################## Start of the program ########################
# Connect to the drone via WIFI
tello.connect()
# Instruct Tello to start the video stream and ensure the first frame is read
tello.streamon()
while True:
    frame = tello.get_frame_read().frame
    if frame is not None:
        break

# Start the hand detection thread when the drone is flying
gesture = 'Unknown'
video_thread = threading.Thread(target=hand_detection, args=(tello,), daemon=True)
video_thread.start()

# Take off the drone
time.sleep(1)
if fly:
    tello.takeoff()
    tello.set_speed(10)
    time.sleep(2)
    tello.move_up(80)

while True:
    hV = dV = vV = rV = 0
    if gesture == 'Land':
        break
    elif gesture == 'Stop' or gesture == 'Unknown':
        hV = dV = vV = rV = 0
    elif gesture == 'Right':
        hV = -15
    elif gesture == 'Left':
        hV = 15
    elif gesture == 'Up':
        vV = 20
    elif gesture == 'Down':
        vV = -20
    elif gesture == 'Come':
        dV = 15
    elif gesture == 'Away':
        dV = -15
    tello.send_rc_control(hV, dV, vV, rV)

# Land the drone
if fly:
    tello.land()
gesture = 'Landed'
# Stop the video stream
tello.streamoff()
# Show the battery level before ending the program
print("Battery :", tello.get_battery())
# Assign the MediaPipe hands detection solution to mpHands and define the confidence levels
mpHands = mp.solutions.hands
hands = mpHands.Hands(min_detection_confidence=0.8, min_tracking_confidence=0.8)
From MediaPipe, we are going to use the MediaPipe Hands solution for our hand detection. There are a few parameters that are important to us:
min_detection_confidence – this governs the first step, identifying a hand in a single frame: a detection only counts as a success at or above this score. The lower the number, the more candidate detections pass, but the higher the chance of errors. The higher the number, the clearer and more precise the hand image needs to be.
min_tracking_confidence – once a hand has been detected based on min_detection_confidence, this threshold governs the frame-to-frame hand tracking; if the tracking confidence drops below it, detection is run again on the next frame.
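To get a feel for what such a threshold does, here is a toy filter (this is not MediaPipe’s actual internals, just an illustration of the trade-off): candidate detections scoring below the confidence value are discarded, so a lower threshold lets more hands through but also more false positives.

```python
# Toy illustration of a detection-confidence threshold (NOT MediaPipe internals):
# candidates below min_detection_confidence are discarded.
def filter_detections(detections, min_detection_confidence=0.8):
    """Keep only detections whose score meets the confidence threshold."""
    return [d for d in detections if d["score"] >= min_detection_confidence]

candidates = [
    {"label": "hand", "score": 0.92},
    {"label": "hand", "score": 0.55},  # too uncertain, dropped at 0.8
]
print(filter_detections(candidates))       # only the 0.92 detection survives
print(filter_detections(candidates, 0.5))  # a looser threshold keeps both
```

With 0.8 only the confident detection survives; dropping the threshold to 0.5 keeps both, which is exactly the "more passes, more errors" behaviour described above.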
I found that the documentation on the MediaPipe official website may not be up to date: min_hand_presence_confidence and running_mode no longer exist, as far as I can tell from hands.py. I spent 30 minutes playing with the demo and reading around to understand the difference between min_detection_confidence and min_hand_presence_confidence.
Just like what we did in face detection, we need to run the hand detection function in a thread (parallel processing) to capture and analyze the hand position, updating a global variable, gesture, so that the drone movement control can take corresponding actions.
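The pattern is the same as in the face detection project: a daemon thread keeps overwriting a shared global, and the main loop just reads whatever the latest value is. A minimal sketch of that pattern, with a hard-coded gesture sequence standing in for the real video processing:

```python
import threading
import time

gesture = 'Unknown'  # shared global: written by the detection thread, read by the control loop

def hand_detection_stub():
    """Stand-in for the real detection loop: keeps updating the shared gesture."""
    global gesture
    for g in ['Stop', 'Up', 'Land']:
        time.sleep(0.05)  # pretend we spent time processing a frame
        gesture = g

t = threading.Thread(target=hand_detection_stub, daemon=True)
t.start()

# The "drone control" loop polls the latest gesture until it sees 'Land'
while gesture != 'Land':
    time.sleep(0.01)  # the real program would call tello.send_rc_control(...) here
print('Final gesture:', gesture)  # → Final gesture: Land
```

The daemon=True flag matters in the real program too: it lets the Python process exit without waiting for the endless detection loop.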
# Read the frame from Tello
frame = tello.get_frame_read().frame
frame = cv2.flip(frame, 1)
# Call hands from the MediaPipe solution for the hand detection; the frame must be RGB
result = hands.process(frame)
Get the latest frame from the drone and flip it from the camera’s point of view to our point of view. Then process the frame with hands, which we defined at the beginning (line #11). Once the hand detection is done, the following information will be stored in result:
result.multi_handedness – ‘Left’ or ‘Right’ hand
result.multi_hand_landmarks – an array containing 21 sets of data as shown below, called landmarks; each landmark includes an x, y & z position. But the x and y coordinates are normalized to [0.0, 1.0] by the image width and height, respectively. The z coordinate represents the landmark depth, with the depth at the wrist being the origin.
result.multi_hand_world_landmarks – The 21 hand landmarks are also presented in world coordinates. Each landmark is composed of x, y, and z, representing real-world 3D coordinates in meters with the origin at the hand’s geometric center.
if result.multi_hand_landmarks:
    for handlms, handside in zip(result.multi_hand_landmarks, result.multi_handedness):
        if handside.classification[0].label == 'Right':
            # We will skip the right hand information
            continue
If any hands are detected, result will contain the necessary data. Since we only use the left hand to control the drone, we skip reading the right-hand data.
Then, we use the drawing_utils from MediaPipe to highlight the landmarks and connect them.
# Convert all the hand information from a ratio into an actual position according to the frame size
for i, landmark in enumerate(handlms.landmark):
    x = int(landmark.x * frame_width)
    y = int(landmark.y * frame_height)
    my_hand.append((x, y))
This retrieves the result.multi_hand_landmarks x & y data and converts it into actual pixel positions based on the frame size, storing them in the array my_hand for the gesture analysis.
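A quick worked example of that conversion (the landmark values here are made up; 960 × 720 is the Tello stream size):

```python
# Hypothetical normalized landmark values, as MediaPipe would report them
landmark_x, landmark_y = 0.5, 0.25    # ratios of the frame width/height
frame_width, frame_height = 960, 720  # the Tello video stream resolution

# Scale the [0.0, 1.0] ratios up to pixel coordinates
x = int(landmark_x * frame_width)     # 0.5 * 960 = 480
y = int(landmark_y * frame_height)    # 0.25 * 720 = 180
print((x, y))  # → (480, 180)
```

So a landmark at (0.5, 0.25) sits horizontally centered, a quarter of the way down the frame.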
Logic for different gestures – Finger open or close?
We simply compare the y position of the fingertip (TIP) to the knuckle. For the thumb, we compare the x position of the fingertip (TIP) to the knuckle instead, as shown below.
When the thumb is open, its fingertip x is bigger than the knuckle x. When the other fingers are open, the fingertip y is smaller than the knuckle y.
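Under that rule the whole finger check boils down to five tip-vs-knuckle comparisons over the my_hand array. Here is a self-contained sketch; the landmark indices follow MediaPipe’s numbering, but the coordinate values are invented purely for the demonstration:

```python
# Landmark indices: wrist=0, thumb=1-4, index=5-8, middle=9-12, ring=13-16, little=17-20
# The TIP of finger i (1..4) is landmark 4+i*4; the joint below it is 2+i*4.
def fingers_open(my_hand):
    """Return [thumb, index, middle, ring, little] as 1 (open) / 0 (closed)."""
    finger_on = []
    # Thumb: compare x of tip (4) against knuckle (2) on the flipped frame
    finger_on.append(1 if my_hand[4][0] > my_hand[2][0] else 0)
    # Other fingers: tip y above (smaller than) the knuckle y means the finger is open
    for i in range(1, 5):
        finger_on.append(1 if my_hand[4 + i * 4][1] < my_hand[2 + i * 4][1] else 0)
    return finger_on

# Invented hand: only the index fingertip (8) is above its knuckle (6)
my_hand = [(100, 300)] * 21  # dummy positions for all 21 landmarks
my_hand[4] = (90, 250)       # thumb tip x is NOT greater than knuckle x -> closed
my_hand[2] = (95, 260)
my_hand[8] = (120, 150)      # index tip y < knuckle y -> open
my_hand[6] = (120, 220)
print(fingers_open(my_hand))  # → [0, 1, 0, 0, 0]
```

With only the index finger on, sum(finger_on) is 1 and finger_on[1] == 1, which the main program maps to the 'Up' gesture.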
Unlike the Face Detection project, we use hV to control the left and right movement instead of rotation (rV). Oh, and the ‘Left/Right’ in the program is from the user’s point of view; from the drone camera’s view, it is reversed.
That’s all for the project. It is quite straightforward and similar to the Face Detection one. Please leave a comment if you have any input. I am now studying how to do swarm flying with 3 Tello EDUs and have just cleared some roadblocks…
New Command – Follow
It just popped into my head that I can add a ‘follow’ command, like what we did for face detection and tracking. I believe we just need to copy & paste the code from the Face Detection project with some modifications.
Above is the hand sign for ‘follow’. Go ahead and modify the original code with the snippets below,
global gesture
global hand_center  # New line for hand follow
# New lines for hand follow
elif sum(finger_on) == 3 and finger_on[0] == finger_on[1] == finger_on[4] == 1:
    # Thumb, index and little finger open: the 'follow' hand sign
    gesture = 'Follow'
    # Apply the Shoelace formula to calculate the palm size
    palm_vertexs = [0, 1, 2, 5, 9, 13, 17]
    area = 0
    for i in range(len(palm_vertexs)):
        x1, y1 = my_hand[palm_vertexs[i]]
        x2, y2 = my_hand[palm_vertexs[(i + 1) % 7]]
        area += (x1 * y2) - (x2 * y1)
    area = 0.5 * abs(area)
    hand_center = my_hand[9][0], my_hand[9][1], area
gesture = 'Unknown'
hand_center = 480, 360, 28000  # New line for hand follow
# New lines for hand follow
elif gesture == 'Follow':
    x, y, size = hand_center
    if x > 480:
        rV = -int((x - 480) / 4.8)
    elif x < 480:
        rV = int((480 - x) / 4.8)
    else:
        rV = 0
    if y > 360:
        vV = -int((y - 360) / 3.6)
    elif y < 360:
        vV = int((360 - y) / 3.6)
    else:
        vV = 0
    if size > 30000:
        dV = -15
    elif size < 26000:
        dV = 15
    else:
        dV = 0
tello.send_rc_control(hV, dV, vV, rV)
In the code above, we use landmarks 0, 1, 2, 5, 9, 13 & 17 to calculate the area of the palm, and this number tells us how close the palm is to the drone; we target a range of 26000 – 30000. We then command the drone to fly toward or away from the hand, similar to the face tracking in the last project.
What I want to highlight is the palm-area calculation: it led me to learn about the Shoelace formula, which is a very interesting and powerful formula. I don’t remember learning it before; maybe I returned it to my teacher already 😎. Anyway, have a look at the video below; it’s worth watching to understand the Shoelace formula.
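The Shoelace formula gives the area of any simple polygon from its vertex coordinates: half the absolute value of the sum of cross terms x_i*y_{i+1} - x_{i+1}*y_i around the boundary. Here is a standalone version of the palm-area calculation, checked against a square whose area we already know:

```python
def shoelace_area(points):
    """Polygon area via the Shoelace formula: 0.5 * |sum(x1*y2 - x2*y1)|."""
    n = len(points)
    area = 0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]  # wrap around to close the polygon
        area += x1 * y2 - x2 * y1
    return 0.5 * abs(area)

# Sanity check with a 100 x 100 square: the area must be 10000
square = [(0, 0), (100, 0), (100, 100), (0, 100)]
print(shoelace_area(square))  # → 10000.0

# In the drone program, the polygon is the palm outline:
# palm_points = [my_hand[i] for i in (0, 1, 2, 5, 9, 13, 17)]
```

Note the vertices must be listed in boundary order (clockwise or counter-clockwise); the palm landmarks 0, 1, 2, 5, 9, 13, 17 happen to trace the palm outline in order, which is why the formula works there directly.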