Directions for future development


Building on the results achieved by this thesis, the system could be combined with algorithms trained on information about the wheelchair's position and the positions of the possible destinations in an environment with many paths and destinations. From this, the shortest and most convenient route can be found so that the wheelchair reaches its destination in the least time, as sketched below.
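One natural way to realize this is to store the learned positions as a weighted graph whose nodes are the wheelchair pose and the candidate destinations, and to find the shortest route with Dijkstra's algorithm. The following minimal C++ sketch is illustrative only; the graph layout, node numbering, and edge weights are assumptions, not part of the thesis implementation.

#include <vector>
#include <queue>
#include <limits>
#include <functional>
#include <cstdio>

// Hypothetical sketch: nodes are the wheelchair pose and candidate
// destinations; edge weights are learned travel costs (distance or time).
struct Edge { int to; float w; };

std::vector<float> dijkstra(const std::vector<std::vector<Edge> >& g, int src)
{
    const float INF = std::numeric_limits<float>::infinity();
    std::vector<float> dist(g.size(), INF);
    // min-heap of (cost so far, node)
    std::priority_queue<std::pair<float, int>,
                        std::vector<std::pair<float, int> >,
                        std::greater<std::pair<float, int> > > pq;
    dist[src] = 0.0f;
    pq.push(std::make_pair(0.0f, src));
    while (!pq.empty())
    {
        std::pair<float, int> top = pq.top();
        pq.pop();
        if (top.first > dist[top.second])
            continue;  // stale queue entry
        for (size_t k = 0; k < g[top.second].size(); ++k)
        {
            const Edge& e = g[top.second][k];
            if (dist[top.second] + e.w < dist[e.to])
            {
                dist[e.to] = dist[top.second] + e.w;
                pq.push(std::make_pair(dist[e.to], e.to));
            }
        }
    }
    return dist;
}

int main()
{
    // wheelchair (node 0), waypoint (1), destination (2); weights made up
    std::vector<std::vector<Edge> > g(3);
    Edge e01 = {1, 2.0f}; g[0].push_back(e01);
    Edge e12 = {2, 1.5f}; g[1].push_back(e12);
    Edge e02 = {2, 5.0f}; g[0].push_back(e02);
    std::vector<float> d = dijkstra(g, 0);
    printf("cost to destination: %.1f\n", d[2]);  // prints 3.5 (via the waypoint)
    return 0;
}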


APPENDICES

Appendix 1: Program for acquiring and processing 3D information from the stereo camera

// CameraDLL.cpp : Defines the entry point for the DLL application.
//
#include "stdafx.h"
#include <stdio.h>
#include <stdlib.h>
#include "include/triclops.h"
#include "include/digiclops.h"
#include "include/pnmutils.h"
#include "include/triclops3d.h"
#include "include/triclopsbuffer.h"
#include "include/triclopscontext.h"
#include "include/triclopsvalidation.h"
#include "include/triclopsrectify.h"
#include "include/triclopsstereo.h"
#include "include/triclopsimageio.h"

BOOL APIENTRY DllMain( HANDLE hModule,
                       DWORD ul_reason_for_call,
                       LPVOID lpReserved )
{
    return TRUE;
}

inline float round2(float x)
{
    return x > 0 ? (float)((int)(x * 100 + 0.5)) / 100
                 : (float)((int)(x * 100 - 0.5)) / 100;
}

// Sort the arrays.
// The x array is the x range visible by the camera.
// The y array is the y range visible by the camera.
// The z array is the closest distance of any object for each value of x
// within the range y.
void q_sort(float x_arr[], float y_arr[], float z_arr[], int left, int right)
{
    int l_hold, r_hold, mid;
    float x_pivot, y_pivot, z_pivot;

    l_hold = left;
    r_hold = right;
    x_pivot = x_arr[left];
    y_pivot = y_arr[left];
    z_pivot = z_arr[left];
    while (left < right)
    {
        while ((x_arr[right] >= x_pivot) && (left < right))
            right--;
        if (left != right)
        {
            x_arr[left] = x_arr[right];
            y_arr[left] = y_arr[right];
            z_arr[left] = z_arr[right];
            left++;
        }
        while ((x_arr[left] <= x_pivot) && (left < right))
            left++;
        if (left != right)
        {
            x_arr[right] = x_arr[left];
            y_arr[right] = y_arr[left];
            z_arr[right] = z_arr[left];
            right--;
        }
    }
    x_arr[left] = x_pivot;
    y_arr[left] = y_pivot;
    z_arr[left] = z_pivot;
    mid = left;
    left = l_hold;
    right = r_hold;
    if (left < mid)
        q_sort(x_arr, y_arr, z_arr, left, mid - 1);
    if (right > mid)
        q_sort(x_arr, y_arr, z_arr, mid + 1, right);
}

void quickSort(float x_arr[], float y_arr[], float z_arr[], int array_size)
{
    q_sort(x_arr, y_arr, z_arr, 0, array_size - 1);
}
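As a small illustration of how the three parallel arrays stay paired during the sort (the sample values below are hypothetical):

    // Hypothetical data: sort three parallel arrays together, keyed on x.
    float xs[] = { 0.4f, -1.2f, 0.0f };
    float ys[] = { 0.1f,  0.2f, 0.3f };
    float zs[] = { 2.5f,  1.1f, 3.0f };
    quickSort(xs, ys, zs, 3);
    // xs becomes {-1.2, 0.0, 0.4}; ys becomes {0.2, 0.3, 0.1} and
    // zs becomes {1.1, 3.0, 2.5}, so each (x, y, z) triple is preserved.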

void __declspec(dllexport) TestDLL2(float *xoutarray, float *zoutarray)
{
    TriclopsInput stereoData;
    TriclopsInput colorData;
    TriclopsImage16 depthImage16;
    TriclopsColorImage colorImage;
    TriclopsContext triclops;
    DigiclopsContext digiclops;
    TriclopsError te;
    DigiclopsError de;
    float x, y, z; //zfinal;
    //int r, g, b;
    //FILE* pointFile;
    int nPoints = 0;
    int pixelinc;
    int i, j, k;
    unsigned short* row;
    unsigned short disparity;

    // open the Digiclops
    de = digiclopsCreateContext( &digiclops );
    de = digiclopsInitialize( digiclops, 0 );

    // get the camera module configuration
    de = digiclopsGetTriclopsContextFromCamera( digiclops, &triclops );

    // set the Digiclops to deliver the stereo image and right (color) image
    de = digiclopsSetImageTypes( digiclops, STEREO_IMAGE | RIGHT_IMAGE );

    // set the Digiclops resolution
    // use 'HALF' resolution when you need faster throughput, especially for
    // color images
    // digiclopsSetImageResolution( digiclops, DIGICLOPS_HALF );
    de = digiclopsSetImageResolution( digiclops, DIGICLOPS_FULL );

    // start grabbing
    de = digiclopsStart( digiclops );

    // set up some stereo parameters:
    // set to 320x240 output images
    te = triclopsSetResolution( triclops, 240, 320 );

    // set disparity range
    te = triclopsSetDisparity( triclops, 1, 89 );
    te = triclopsSetStereoMask( triclops, 9 );        //11 (3)
    te = triclopsSetEdgeCorrelation( triclops, 1 );   //1
    te = triclopsSetEdgeMask( triclops, 7 );          //15 (this value reduces many spots between obstacles)

    // let's turn off all validation except subpixel and surface
    // this works quite well
    te = triclopsSetTextureValidation( triclops, 1 ); //0
    te = triclopsSetTextureValidationThreshold( triclops, 0.82 ); // Jordan fixed from 0 - 1.15
    te = triclopsSetUniquenessValidation( triclops, 1 ); //0
    te = triclopsSetUniquenessValidationThreshold( triclops, 1.18 );

    // turn on sub-pixel interpolation
    te = triclopsSetSubpixelInterpolation( triclops, 1 ); //1

    // make sure strict subpixel validation is on
    te = triclopsSetStrictSubpixelValidation( triclops, 1 ); //1

    // turn on surface validation
    te = triclopsSetSurfaceValidation( triclops, 1 );        //100
    te = triclopsSetSurfaceValidationSize( triclops, 175 );  //200
    te = triclopsSetSurfaceValidationDifference( triclops, 1.14 ); // Jordan fixed from 0.5 - 0.90

    // set back-forth validation to reduce noise in the readings
    te = triclopsSetBackForthValidation( triclops, 1 );

    // grab the image set
    de = digiclopsGrabImage( digiclops );

    // grab the stereo data
    de = digiclopsExtractTriclopsInput( digiclops, STEREO_IMAGE, &stereoData );

    // grab the color image data
    // (note: if you are using a B&W Digiclops, this will of course be in B&W)
    de = digiclopsExtractTriclopsInput( digiclops, RIGHT_IMAGE, &colorData );

    // preprocess the images
    te = triclopsPreprocess( triclops, &stereoData );

    // stereo processing
    te = triclopsStereo( triclops );

    // retrieve the interpolated depth image from the context
    te = triclopsGetImage16( triclops, TriImg16_DISPARITY, TriCam_REFERENCE,
                             &depthImage16 );
    te = triclopsRectifyColorImage( triclops, TriCam_REFERENCE, &colorData,
                                    &colorImage );

    int const ELEMENT_COUNT = depthImage16.ncols;
    float *x_cur, *y_cur, *z_cur;
    x_cur = new float[ELEMENT_COUNT];
    y_cur = new float[ELEMENT_COUNT];
    z_cur = new float[ELEMENT_COUNT];
    for ( i = 0; i < ELEMENT_COUNT; i++ )
    {
        // assumed initialization: start each column at a far sentinel
        // distance so the scan below keeps the closest z per column
        x_cur[i] = 0.0f;
        y_cur[i] = 0.0f;
        z_cur[i] = 100.0f;
    }

    int cur_pos = 0;

    // determine the number of pixels spacing per row.
    // Acquire data and save in x, y, and z (closest z for each column).
    pixelinc = depthImage16.rowinc / 2;
    for ( i = 0, k = 0; i < depthImage16.nrows; i++ )
    {
        row = depthImage16.data + i * pixelinc;
        for ( j = 0; j < depthImage16.ncols; j++, k++ )
        {
            disparity = row[j];
            // filter invalid points
            if (disparity < 0xFF00)
            {
                // convert the 16-bit disparity value to floating point x, y, z
                triclopsRCD16ToXYZ( triclops, i, j, disparity, &x, &y, &z );
                if ((x <= 1.7) && (x >= -1.8) && (y >= -0.3) && (y <= 0.6) &&
                    (z > 0) && (z < 5))
                {
                    if ( z_cur[j] > round2(z) )
                    {
                        z_cur[j] = round2(z);
                        x_cur[j] = round2(x);
                    }
                }
            }
        }
    }

    for ( i = 0; i < ELEMENT_COUNT; i++ )
    {
        xoutarray[i] = x_cur[i];
        zoutarray[i] = z_cur[i];
    }

    delete[] x_cur;
    delete[] y_cur;
    delete[] z_cur;

    digiclopsStop( digiclops );
    digiclopsDestroyContext( digiclops );
    triclopsDestroyContext( triclops );
}
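Both appendices expose their entry points as DLL exports. A minimal host-side sketch of calling TestDLL2 is shown below; it assumes the library is built as CameraDLL.dll with an undecorated export name (e.g., via extern "C" or a .def file), and that each output array holds one (x, z) pair per image column (320 columns at the resolution configured above). This is an illustrative sketch, not the thesis' actual host application.

#include <windows.h>
#include <cstdio>

// Hypothetical host-side sketch: load CameraDLL.dll and call TestDLL2.
typedef void (*TestDLL2Fn)(float* xoutarray, float* zoutarray);

int main()
{
    HMODULE dll = LoadLibrary(TEXT("CameraDLL.dll"));  // assumed DLL name
    if (!dll) return 1;
    TestDLL2Fn testDll2 = (TestDLL2Fn)GetProcAddress(dll, "TestDLL2");
    if (!testDll2) { FreeLibrary(dll); return 1; }

    // one (x, z) pair per image column; 320 columns at 320x240 resolution
    static float xout[320], zout[320];
    testDll2(xout, zout);
    for (int i = 0; i < 320; i += 32)
        printf("col %3d: x = %.2f m, z = %.2f m\n", i, xout[i], zout[i]);

    FreeLibrary(dll);
    return 0;
}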

Appendix 2: Landmark recognition program

// EasyDLL5.cpp : Defines the entry point for the DLL application.
//

#include "stdafx.h"

extern "C"__declspec(dllexport) float __cdecl control(float &theta,

float &status,float &depth, float &width,float &flat1,float

&bientam);

#include <stdio.h>

#include "triclops.h"

- 71 - #include <cv.h> #include <highgui.h> #include <ctype.h> #include <stdlib.h> #include <math.h> #define PI 3.14159265 #include <iostream> #include <vector> using namespace std;

// define whether to use approximate nearest-neighbor search

#define USE_FLANN

IplImage *image = 0;

double compareSURFDescriptors( const float* d1, const float* d2,
                               double best, int length )
{
    double total_cost = 0;
    assert( length % 4 == 0 );
    for( int i = 0; i < length; i += 4 )
    {
        double t0 = d1[i]   - d2[i];
        double t1 = d1[i+1] - d2[i+1];
        double t2 = d1[i+2] - d2[i+2];
        double t3 = d1[i+3] - d2[i+3];
        total_cost += t0*t0 + t1*t1 + t2*t2 + t3*t3;
        if( total_cost > best )
            break;
    }
    return total_cost;
}

int naiveNearestNeighbor( const float* vec, int laplacian,
                          const CvSeq* model_keypoints,
                          const CvSeq* model_descriptors )
{
    int length = (int)(model_descriptors->elem_size/sizeof(float));
    int i, neighbor = -1;
    double d, dist1 = 1e6, dist2 = 1e6;
    CvSeqReader reader, kreader;
    cvStartReadSeq( model_keypoints, &kreader, 0 );
    cvStartReadSeq( model_descriptors, &reader, 0 );
    for( i = 0; i < model_descriptors->total; i++ )
    {
        const CvSURFPoint* kp = (const CvSURFPoint*)kreader.ptr;
        const float* mvec = (const float*)reader.ptr;
        CV_NEXT_SEQ_ELEM( kreader.seq->elem_size, kreader );
        CV_NEXT_SEQ_ELEM( reader.seq->elem_size, reader );
        if( laplacian != kp->laplacian )
            continue;
        d = compareSURFDescriptors( vec, mvec, dist2, length );
        if( d < dist1 )
        {
            dist2 = dist1;
            dist1 = d;
            neighbor = i;
        }
        else if ( d < dist2 )
            dist2 = d;
    }
    // accept the match only if it passes the 0.6 nearest-neighbour ratio test
    if ( dist1 < 0.6*dist2 )
        return neighbor;
    return -1;
}

void findPairs( const CvSeq* objectKeypoints, const CvSeq* objectDescriptors,
                const CvSeq* imageKeypoints, const CvSeq* imageDescriptors,
                vector<int>& ptpairs )
{
    int i;
    CvSeqReader reader, kreader;
    cvStartReadSeq( objectKeypoints, &kreader );
    cvStartReadSeq( objectDescriptors, &reader );
    ptpairs.clear();
    for( i = 0; i < objectDescriptors->total; i++ )
    {
        const CvSURFPoint* kp = (const CvSURFPoint*)kreader.ptr;
        const float* descriptor = (const float*)reader.ptr;
        CV_NEXT_SEQ_ELEM( kreader.seq->elem_size, kreader );
        CV_NEXT_SEQ_ELEM( reader.seq->elem_size, reader );
        int nearest_neighbor = naiveNearestNeighbor( descriptor,
            kp->laplacian, imageKeypoints, imageDescriptors );
        if( nearest_neighbor >= 0 )
        {
            ptpairs.push_back(i);
            ptpairs.push_back(nearest_neighbor);
        }
    }
}

void flannFindPairs( const CvSeq*, const CvSeq* objectDescriptors,
                     const CvSeq*, const CvSeq* imageDescriptors,
                     vector<int>& ptpairs )
{
    int length = (int)(objectDescriptors->elem_size/sizeof(float));

    cv::Mat m_object(objectDescriptors->total, length, CV_32F);
    cv::Mat m_image(imageDescriptors->total, length, CV_32F);

    // copy descriptors
    CvSeqReader obj_reader;
    float* obj_ptr = m_object.ptr<float>(0);
    cvStartReadSeq( objectDescriptors, &obj_reader );
    for(int i = 0; i < objectDescriptors->total; i++ )
    {
        const float* descriptor = (const float*)obj_reader.ptr;
        CV_NEXT_SEQ_ELEM( obj_reader.seq->elem_size, obj_reader );
        memcpy(obj_ptr, descriptor, length*sizeof(float));
        obj_ptr += length;
    }
    CvSeqReader img_reader;
    float* img_ptr = m_image.ptr<float>(0);
    cvStartReadSeq( imageDescriptors, &img_reader );
    for(int i = 0; i < imageDescriptors->total; i++ )
    {
        const float* descriptor = (const float*)img_reader.ptr;
        CV_NEXT_SEQ_ELEM( img_reader.seq->elem_size, img_reader );
        memcpy(img_ptr, descriptor, length*sizeof(float));
        img_ptr += length;
    }

    // find nearest neighbors using FLANN
    cv::Mat m_indices(objectDescriptors->total, 2, CV_32S);
    cv::Mat m_dists(objectDescriptors->total, 2, CV_32F);
    cv::flann::Index flann_index(m_image,
        cv::flann::KDTreeIndexParams(4));  // using 4 randomized kdtrees
    flann_index.knnSearch(m_object, m_indices, m_dists, 2,
        cv::flann::SearchParams(64));      // maximum number of leafs checked

    int* indices_ptr = m_indices.ptr<int>(0);
    float* dists_ptr = m_dists.ptr<float>(0);
    for (int i = 0; i < m_indices.rows; ++i)
    {
        // 0.6 ratio test between the two nearest neighbours
        if (dists_ptr[2*i] < 0.6*dists_ptr[2*i+1])
        {
            ptpairs.push_back(i);
            ptpairs.push_back(indices_ptr[2*i]);
        }
    }
}

/* a rough implementation for object location */
int locatePlanarObject( const CvSeq* objectKeypoints,
                        const CvSeq* objectDescriptors,
                        const CvSeq* imageKeypoints,
                        const CvSeq* imageDescriptors,
                        const CvPoint src_corners[4],
                        CvPoint dst_corners[4] )
{
    double h[9];
    CvMat _h = cvMat(3, 3, CV_64F, h);
    vector<int> ptpairs;
    vector<CvPoint2D32f> pt1, pt2;
    CvMat _pt1, _pt2;
    int i, n;

#ifdef USE_FLANN
    flannFindPairs( objectKeypoints, objectDescriptors,
                    imageKeypoints, imageDescriptors, ptpairs );
#else
    findPairs( objectKeypoints, objectDescriptors,
               imageKeypoints, imageDescriptors, ptpairs );
#endif

    n = ptpairs.size()/2;
    if( n < 4 )
        return 0;

    pt1.resize(n);
    pt2.resize(n);
    for( i = 0; i < n; i++ )
    {
        pt1[i] = ((CvSURFPoint*)cvGetSeqElem(objectKeypoints, ptpairs[i*2]))->pt;
        pt2[i] = ((CvSURFPoint*)cvGetSeqElem(imageKeypoints, ptpairs[i*2+1]))->pt;
    }
    _pt1 = cvMat(1, n, CV_32FC2, &pt1[0] );
    _pt2 = cvMat(1, n, CV_32FC2, &pt2[0] );
    if( !cvFindHomography( &_pt1, &_pt2, &_h, CV_RANSAC, 5 ))
        return 0;

    // project the object corners into the scene with the homography
    for( i = 0; i < 4; i++ )
    {
        double x = src_corners[i].x, y = src_corners[i].y;
        double Z = 1./(h[6]*x + h[7]*y + h[8]);
        double X = (h[0]*x + h[1]*y + h[2])*Z;
        double Y = (h[3]*x + h[4]*y + h[5])*Z;
        dst_corners[i] = cvPoint(cvRound(X), cvRound(Y));
    }
    return 1;
}
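The corner-projection loop above is the standard planar homography mapping written in scalar form. With h[0]..h[8] the row-major entries of the 3x3 homography H estimated by cvFindHomography, each source corner (x, y) maps in homogeneous coordinates as:

\begin{pmatrix} X' \\ Y' \\ Z' \end{pmatrix} = H \begin{pmatrix} x \\ y \\ 1 \end{pmatrix},
\qquad
X = \frac{X'}{Z'} = \frac{h_0 x + h_1 y + h_2}{h_6 x + h_7 y + h_8},
\quad
Y = \frac{Y'}{Z'} = \frac{h_3 x + h_4 y + h_5}{h_6 x + h_7 y + h_8}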

void find_obj(IplImage* object, IplImage* frame,
              int &tcenter_obj_x, int &tcenter_obj_y )
{
    CvMemStorage* storage = cvCreateMemStorage(0);

    static CvScalar colors[] =
    {
        {{0,0,255}},
        {{0,128,255}},
        {{0,255,255}},
        {{0,255,0}},
        {{255,128,0}},
        {{255,255,0}},
        {{255,0,0}},
        {{255,0,255}},
        {{255,255,255}}
    };

    IplImage* object_grey = cvCreateImage(cvGetSize(object), 8, 1);
    cvCvtColor( object, object_grey, CV_RGB2GRAY );
    IplImage* image_grey = cvCreateImage(cvGetSize(frame), 8, 1);
    cvCvtColor( frame, image_grey, CV_RGB2GRAY );
    IplImage* object_color = cvCreateImage(cvGetSize(object_grey), 8, 3);

    CvSeq *objectKeypoints = 0, *objectDescriptors = 0;
    CvSeq *imageKeypoints = 0, *imageDescriptors = 0;
    int i;

    // SURF detector parameters (assumed: hessian threshold 500, extended
    // descriptors, as in the standard OpenCV find_obj sample)
    CvSURFParams params = cvSURFParams(500, 1);

    cvExtractSURF( object_grey, 0, &objectKeypoints, &objectDescriptors,
                   storage, params );
    cvExtractSURF( image_grey, 0, &imageKeypoints, &imageDescriptors,
                   storage, params );

    CvPoint src_corners[4] = {{0,0}, {object_grey->width,0},
        {object_grey->width, object_grey->height}, {0, object_grey->height}};
    CvPoint dst_corners[4];

#ifdef USE_FLANN
    //printf("Using approximate nearest neighbor search\n");
#endif

    if( locatePlanarObject( objectKeypoints, objectDescriptors, imageKeypoints,
                            imageDescriptors, src_corners, dst_corners ))
    {
        // draw the located landmark
        for( i = 0; i < 4; i++ )
        {
            CvPoint r1 = dst_corners[i%4];
            CvPoint r2 = dst_corners[(i+1)%4];
            cvLine( frame, cvPoint(r1.x, r1.y), cvPoint(r2.x, r2.y),
                    colors[0], 2 );
        }
        CvPoint r11 = dst_corners[0];
        CvPoint r33 = dst_corners[2];

        // determine the center of the object
        CvPoint center1;
        center1.x = cvRound(r11.x + (r33.x - r11.x)/2);
        center1.y = cvRound(r11.y + (r33.y - r11.y)/2);
        cvCircle( frame, center1, 5, CV_RGB(0,255,0), -1, 8, 0 );
        if ((center1.x >= 0) && (center1.y >= 0) &&
            (center1.x <= 640) && (center1.y <= 480))
        {
            tcenter_obj_x = center1.x;
            tcenter_obj_y = center1.y;
        }
        else
        {
            tcenter_obj_x = 0;
            tcenter_obj_y = 0;
        }
    }
    else
    {
        tcenter_obj_x = 0;
        tcenter_obj_y = 0;
    }

    vector<int> ptpairs;
#ifdef USE_FLANN
    flannFindPairs( objectKeypoints, objectDescriptors,
                    imageKeypoints, imageDescriptors, ptpairs );
#else
    findPairs( objectKeypoints, objectDescriptors,
               imageKeypoints, imageDescriptors, ptpairs );
#endif
    //cvShowImage( "Object Correspond", frame1);

    // mark the SURF keypoints on the object template
    for( i = 0; i < objectKeypoints->total; i++ )
    {
        CvSURFPoint* r = (CvSURFPoint*)cvGetSeqElem( objectKeypoints, i );
        CvPoint center;
        int radius;
        center.x = cvRound(r->pt.x);
        center.y = cvRound(r->pt.y);
        radius = cvRound(r->size*1.2/9.*2);
        cvCircle( object, center, radius, colors[0], 1, 8, 0 );
    }
}

__declspec(dllexport) float __cdecl control(float &theta, float &status,
    float &depth, float &width, float &flat1, float &bientam)
{
    TriclopsContext triclops;
    TriclopsImage depthImage;
    TriclopsImage rightImageRaw;
    TriclopsInput inputData;
    DigiclopsContext digiclops;
    TriclopsROI *rois;
    int maxRois;
    int disparity;
    int i, j;
    float x, y, z;

    // open the Digiclops
    digiclopsCreateContext( &digiclops );
    digiclopsInitialize( digiclops, 0 );

    // get the camera module configuration
    digiclopsGetTriclopsContextFromCamera( digiclops, &triclops );

    // set the Digiclops to deliver the stereo image only
    digiclopsSetImageTypes( digiclops, STEREO_IMAGE );

    // set the Digiclops resolution
    // use 'HALF' resolution when you need faster throughput, especially for
    // color images
    // digiclopsSetImageResolution( digiclops, DIGICLOPS_HALF );
    digiclopsSetImageResolution( digiclops, DIGICLOPS_FULL );

    // start grabbing
    digiclopsStart( digiclops );

    // set up some stereo parameters:
    // set to 640x480 output images
    //triclopsSetResolution( triclops, 240, 320 );
    triclopsSetResolution( triclops, 480, 640 );

    // set disparity range
    triclopsSetDisparity( triclops, 5, 60 );

    // set invalid pixel value
    triclopsSetUniquenessValidationMapping( triclops, 0 );

    unsigned char *row;

    // grab an image
    digiclopsGrabImage( digiclops );
    digiclopsExtractTriclopsInput( digiclops, STEREO_IMAGE, &inputData );

    // preprocess the images
    triclopsPreprocess( triclops, &inputData );

    // stereo processing
    triclopsStereo( triclops );

    // retrieve the depth image from the context
    triclopsGetImage( triclops, TriImg_DISPARITY, TriCam_REFERENCE,
                      &depthImage );
    digiclopsWritePPM( digiclops, RIGHT_IMAGE, "right.ppm" );

    int m = 0;
    float flag_vatcan;
    // capacity for every pixel (480x640), since each point in range may be stored
    const int MAX_POINTS = 480 * 640;
    float *mang_Z   = new float[MAX_POINTS];
    float *mang_X   = new float[MAX_POINTS];
    float *mang_Row = new float[MAX_POINTS];
    float *mang_Col = new float[MAX_POINTS];
    for (i = 0; i < 480; i++)
    {
        for ( j = 0; j < 640; j++)
        {
            disparity = depthImage.data[i * depthImage.rowinc + j];
            // invalid pixels were mapped to 0 above
            if ( disparity > 0 )
            {
                triclopsRCD8ToXYZ( triclops, i, j, disparity, &x, &y, &z );
                // keep points within 0.5 m - 1.2 m in front of the wheelchair
                if ((z > 0.5) && (z < 1.2))
                {
                    mang_Z[m] = z;
                    mang_X[m] = x;
                    mang_Row[m] = float(i);
                    mang_Col[m] = float(j);
                    m++;
                }
            }
        }
    }
    // fewer than 10 close points means no obstacle blocks the view
    if ( m <= 10 )
        flag_vatcan = 1;
    else
        flag_vatcan = 0;

    IplImage* img_original = cvLoadImage( "right.ppm", CV_LOAD_IMAGE_COLOR );

    if ( flag_vatcan == 1)
    {
        int k;
        char* names[] = { "so4.jpg", "so5.jpg", 0 };
        int center_obj_x[2] = {0,0};
        int center_obj_y[2] = {0,0};
        int center_obj0_x = 0;
        int center_obj0_y = 0;

        // search for each landmark template in the right image
        for( k = int(bientam); k < 2; k++ )
        {
            IplImage* object1 = cvLoadImage(names[k], CV_LOAD_IMAGE_COLOR);
            find_obj(object1, img_original, center_obj0_x, center_obj0_y);
            center_obj_x[k] = center_obj0_x;
            center_obj_y[k] = center_obj0_y;
        }
        for( k = int(bientam); k < 2; k++ )
        {
            if ((center_obj_x[k] != 0) || (center_obj_y[k] != 0))
            {
                center_obj0_x = center_obj_x[k];
                center_obj0_y = center_obj_y[k];
                bientam = float(k+1);
                break;
            }
            else
            {
                bientam = float(k);
                break;
            }
        }
        cvShowImage( "Object Correspond", img_original);

        disparity = depthImage.data[center_obj0_y * depthImage.rowinc
                                    + center_obj0_x];
        // invalid pixel?
        if ( disparity > 0 )
        {
            triclopsRCD8ToXYZ( triclops, center_obj0_y, center_obj0_x,
                               disparity, &x, &y, &z );
        }

        // initial data
        depth = z;
        width = x;
        float x1;
        float check;
        // landmark within the usable depth range: compute the steering angle
        if ((0.5 < z) && (z < 2.0))
        {
            check = 1;
            if (x < 0)
            {
                x1 = fabs(x);
                theta = cvRound((atan(x1/z)*180)/PI);
            }
            else
            {
                theta = cvRound((atan(x/z)*180)/PI);
            }
        }
        else
        {
            check = 0;
        }
        if ((0 <= theta) && (theta <= 5))
