Localization, Obstacle avoidance, Control and Kinematics [LOCK Part-1]

####Introduction
Robotics is a multi-disciplinary field whose heart depends on bleeding-edge algorithms from AI, control, optimization and more. Therein lies its potential for delivering theory-intensive subjects like control theory, artificial intelligence and computer vision in an intuitive and informal manner.

Through this series of three tutorials you will be ready to set up your own differential-steering robot, capable of navigating to any given coordinate while avoiding obstacles, with only a static camera mounted on the ceiling as shown below. By following and implementing the code examples in LOCK, you will be ready to build more interesting and complex systems. Now let's crack on.

####Tools needed

  • USB HD webcam
  • OpenCV 2.4.x
  • git

####Installing OpenCV
To run the code in this post you need the OpenCV 2.4.x bindings.

  • Ubuntu 14.04
  • I have never set up OpenCV on a Windows platform, but it will definitely be way easier than on any Linux system.
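Once installed, a quick sanity check is to compile a one-liner that prints the library version. This is only a sketch; it assumes pkg-config knows about your OpenCV install.

#include <opencv2/core/core.hpp>
#include <iostream>

// Prints the OpenCV version this program was compiled against.
// Build with: g++ check.cpp -o check `pkg-config --cflags --libs opencv`
int main()
{
    std::cout << "OpenCV version: " << CV_VERSION << std::endl;
    return 0;
}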

####What is localization?
In robotics, localization is a fundamental task. In wheeled robots it is usually performed with odometers, or with a camera doing visual odometry. In our toy problem, however, localization simply means finding the coordinates of the bot with a webcam mounted on the ceiling. Performance constraints force us to use C++. The bot itself can be identified in a variety of ways; since we consider this an introduction of sorts, we simply cover our bot with solid-colored patches (yellow and green in the code below) to simplify detection.

####Algorithm
The algorithm can be stated in three simple steps, applied once per color (a miniature sketch follows the list):

  • Threshold the image to obtain a binary image.
  • Find the biggest contour or blob.
  • Find the center of mass.
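Before the full class, here is the pipeline in miniature for a single color. The HSV bounds below are hypothetical placeholders, not the tuned values used later.

#include <opencv2/opencv.hpp>
using namespace cv;

// Find the centroid of the largest blob of one color in a BGR frame.
// The HSV bounds are placeholders; tune them for your marker.
Point locateBlob(const Mat& frame)
{
    Mat hsv, mask;
    cvtColor(frame, hsv, COLOR_BGR2HSV);
    inRange(hsv, Scalar(20, 100, 100), Scalar(30, 255, 255), mask); // 1. threshold

    std::vector<std::vector<Point> > contours;
    findContours(mask, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE); // 2. contours

    int best = -1;
    double bestArea = 0;
    for (size_t i = 0; i < contours.size(); i++)
    {
        double area = contourArea(contours[i]);
        if (area > bestArea) { bestArea = area; best = (int)i; }
    }
    if (best < 0) return Point(-1, -1); // no blob found

    Moments m = moments(contours[best]); // 3. center of mass: (m10/m00, m01/m00)
    return Point((int)(m.m10 / m.m00), (int)(m.m01 / m.m00));
}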

####Localization implementation class with explanation
This class is necessary to identify the pose of the robot: the centroids of the yellow patch (Yc) and the green patch (Gc) give both its position and its heading.

#include <opencv2/opencv.hpp>
#include <cmath>

using namespace cv;
using namespace std;

class Localizer
{
public:
    Point Yc, Gc; // centroids of the yellow and green patches on the bot

    // HSV thresholds for the yellow (y*) and green (g*) patches;
    // these must be tuned for your camera and lighting.
    int yLowH, yLowS, yLowV, yHighH, yHighS, yHighV;
    int gLowH, gLowS, gLowV, gHighH, gHighS, gHighV;

    void GetXY(Mat frame)
    {
        Mat imgHSV;
        cvtColor(frame, imgHSV, COLOR_BGR2HSV); // convert the captured frame from BGR to HSV

        // threshold each color into its own binary mask
        Mat imgThresholded; // yellow mask
        Mat greenthresh;    // green mask
        inRange(imgHSV, Scalar(yLowH, yLowS, yLowV), Scalar(yHighH, yHighS, yHighV), imgThresholded);
        inRange(imgHSV, Scalar(gLowH, gLowS, gLowV), Scalar(gHighH, gHighS, gHighV), greenthresh);

        Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(5, 5));

        // morphological opening (erode then dilate) removes small objects from the foreground
        erode(imgThresholded, imgThresholded, kernel);
        erode(greenthresh, greenthresh, kernel);
        dilate(imgThresholded, imgThresholded, kernel);
        dilate(greenthresh, greenthresh, kernel);

        // morphological closing (dilate then erode) fills small holes in the foreground
        dilate(imgThresholded, imgThresholded, kernel);
        dilate(greenthresh, greenthresh, kernel);
        erode(imgThresholded, imgThresholded, kernel);
        erode(greenthresh, greenthresh, kernel);

        vector<vector<Point> > contours, gcontours;
        vector<Vec4i> hierarchy, ghierarchy;
        findContours(imgThresholded, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));
        findContours(greenthresh, gcontours, ghierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));

        vector<Moments> mu(contours.size());
        vector<Moments> gmu(gcontours.size());
        for (size_t i = 0; i < contours.size(); i++)
            mu[i] = moments(contours[i], false);
        for (size_t i = 0; i < gcontours.size(); i++)
            gmu[i] = moments(gcontours[i], false);

        // pick the largest blob of each color (m00 is the blob's area)
        double maxArea = 0;
        int bestcnt = 0;
        for (size_t i = 0; i < contours.size(); i++)
        {
            if (mu[i].m00 > maxArea)
            {
                maxArea = mu[i].m00;
                bestcnt = (int)i;
            }
        }
        double gmaxArea = 0;
        int gbestcnt = 0;
        for (size_t i = 0; i < gcontours.size(); i++)
        {
            if (gmu[i].m00 > gmaxArea)
            {
                gmaxArea = gmu[i].m00; // fixed: the original assigned this to maxArea
                gbestcnt = (int)i;
            }
        }

        // centroid of a blob is (m10/m00, m01/m00); update only if both patches were found
        if (!(contours.empty() || gcontours.empty()))
        {
            Yc = Point(mu[bestcnt].m10 / mu[bestcnt].m00, mu[bestcnt].m01 / mu[bestcnt].m00);
            Gc = Point(gmu[gbestcnt].m10 / gmu[gbestcnt].m00, gmu[gbestcnt].m01 / gmu[gbestcnt].m00);
        }
    }

    // Returns the signed angle (in radians) between the bot's heading
    // (the vector from Yc to Gc) and the vector from the goal to Gc.
    double GoToGoal(Point goal)
    {
        Point oriented_dir = Gc - Yc;
        Point goal_dir = Gc - goal;
        double dot = oriented_dir.x * goal_dir.x + oriented_dir.y * goal_dir.y;
        double det = oriented_dir.x * goal_dir.y - oriented_dir.y * goal_dir.x;
        return atan2(det, dot);
    }
};
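A minimal usage sketch, assuming camera index 0 and hypothetical placeholder thresholds (the full driver loop is not shown in this part):

#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;

int main()
{
    VideoCapture cap(0); // ceiling-mounted webcam
    if (!cap.isOpened()) return -1;

    Localizer loc;
    // Placeholder HSV ranges; tune these for your markers and lighting.
    loc.yLowH = 20;  loc.yHighH = 30;  // yellow hue
    loc.yLowS = 100; loc.yHighS = 255;
    loc.yLowV = 100; loc.yHighV = 255;
    loc.gLowH = 45;  loc.gHighH = 75;  // green hue
    loc.gLowS = 100; loc.gHighS = 255;
    loc.gLowV = 100; loc.gHighV = 255;

    Point goal(320, 240); // hypothetical goal in image coordinates

    Mat frame;
    while (cap.read(frame))
    {
        loc.GetXY(frame);                 // update Yc and Gc
        double err = loc.GoToGoal(goal);  // signed heading error in radians
        std::cout << "heading error: " << err << " rad" << std::endl;
        if (waitKey(30) == 27) break;     // ESC to quit
    }
    return 0;
}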

####Making Detection Intensity Invariant
The problem with the above algorithm is that it requires re-tuning the threshold values whenever the lighting intensity changes. However, by implementing a small hack we can make the detection almost intensity invariant.
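One common way to achieve this (not necessarily the exact hack used in this series) is to histogram-equalize the brightness (V) channel of the HSV image before thresholding, so that global lighting changes are flattened out:

#include <opencv2/opencv.hpp>
using namespace cv;

// Equalize the V channel so global brightness changes affect the
// thresholds less. This is one common trick; the post's hack may differ.
Mat normalizeIntensity(const Mat& frameBGR)
{
    Mat hsv;
    cvtColor(frameBGR, hsv, COLOR_BGR2HSV);

    std::vector<Mat> channels;
    split(hsv, channels);                   // channels: [0]=H, [1]=S, [2]=V
    equalizeHist(channels[2], channels[2]); // spread V over the full range
    merge(channels, hsv);
    return hsv;                             // feed to inRange() as before
}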