Finding alternative technologies for GNSS-denied environments is key to extending the capabilities and robustness of autonomous vehicles and mapping applications. One solution to this problem is visual simultaneous localization and mapping (SLAM). Since cameras are lightweight, robust, and passive sensors, they are leading candidates for GNSS-denied environments. Accuracy and robustness are the two main concerns for these technologies. While high accuracy is achieved through loop closing (correcting the position when revisiting previously seen places), robustness is achieved through accurate short-term visual odometry. Hence SOFT-SLAM , currently the top-ranking stereo vision method on the KITTI benchmark , focused on pure visual odometry before dealing with Simultaneous Localization and Mapping. In this paper we present a novel algorithm for fast and robust stereo odometry based on a hybrid stereo and monocular approach. First, interest points are selected by circular matching of features between the left and right images of the current and next frames, using the sparse feature descriptor described in Stereoscan . Then the rotation and translation between two consecutive poses are estimated separately. A least-squares estimate is used for the translation, whereas a parametrization of the epipolar constraint similar to  is used for the rotation. Experimental results show that the proposed algorithm achieves state-of-the-art translation error on the KITTI benchmark using the KITTI evaluation metric . According to this metric, it already has lower mean translational and rotational errors than state-of-the-art SLAM algorithms such as ORB-SLAM2, even though ours is a pure visual odometry algorithm. We also tested our algorithm in an inertial-aided setting on the EuRoC MAV dataset, where we likewise achieved competitive results. Our algorithm processes a frame in 0.07 s on average on a single core at 3 GHz, which allows real-time odometry output.
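The decoupled estimation step can be illustrated with a minimal sketch (the function name, the use of triangulated 3D correspondences, and the closed-form solution below are our illustrative assumptions, not necessarily the paper's exact formulation): once the rotation is fixed, the least-squares translation between matched 3D points has a simple closed form, namely the mean of the residuals.

```python
import numpy as np

def estimate_translation(points_prev, points_curr, R):
    """Least-squares translation given a known rotation R.

    Assumes triangulated 3D correspondences with the rigid-motion
    model p_curr ~= R @ p_prev + t. The minimizer of
    sum ||R p_prev + t - p_curr||^2 over t is the mean residual.
    """
    residuals = points_curr - (R @ points_prev.T).T
    return residuals.mean(axis=0)

# Synthetic check: rotate points by a known R, shift by t_true,
# then recover the translation from the correspondences.
rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))          # hypothetical triangulated points
theta = 0.1                           # small yaw rotation
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t_true = np.array([0.3, -0.1, 1.2])
Q = (R @ P.T).T + t_true
t_est = estimate_translation(P, Q, R)
```

In a real pipeline the rotation would come first from the epipolar-constraint parametrization, and robust weighting or outlier rejection (e.g. RANSAC over the circularly matched features) would replace the plain mean.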