In this work we study the use of 3D hand poses to recognize first-person hand actions that interact with 3D objects. Towards this goal, we collected RGB-D video sequences comprising more than 100K frames of 45 daily hand action categories, involving 25 different objects in several hand grasp configurations. To obtain high-quality hand pose annotations from real sequences, we used our own mo-cap system, which automatically infers the location of each of the 21 joints of the hand via 6 magnetic sensors on the fingertips and the inverse kinematics of a hand model. To the best of our knowledge, this is the first benchmark of RGB-D hand action sequences with 3D hand poses. Additionally, we recorded the 6D (i.e., 3D rotation and location) object poses and provide 3D object models for a subset of hand-object interaction sequences. We present extensive experimental evaluations of RGB-D and pose-based action recognition with 18 baseline and state-of-the-art methods. We measure the impact of using appearance features, poses, and their combinations, and evaluate different training/testing protocols, including cross-person settings. Finally, we assess how ready current hand pose estimation is when hands are severely occluded by objects in egocentric views, and its influence on action recognition. From the results, we see clear benefits of using hand pose as a cue for action recognition compared to other data modalities. Our dataset and experiments can be of interest to the communities of 6D object pose, robotics, and 3D hand pose estimation, as well as action recognition.
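To make the 6D object pose annotations concrete, the following is a minimal sketch of how such a pose might be represented and applied: a 3x3 rotation matrix together with a 3D translation, mapping points of an object model from object coordinates into the camera frame. The function names and the specific parameterization here are illustrative assumptions, not the dataset's actual file format or API.

```python
import numpy as np

def rotation_z(theta):
    """Rotation matrix about the z-axis by angle theta (radians).
    Used here only as a convenient example rotation."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def apply_6d_pose(R, t, points):
    """Transform Nx3 model points by a 6D pose (rotation R, translation t).

    A 6D pose has 3 rotational and 3 translational degrees of freedom;
    each point p maps to R @ p + t.
    """
    points = np.asarray(points, dtype=float)
    return points @ R.T + np.asarray(t, dtype=float)

# Example: rotate a model vertex 90 degrees about z, then translate.
R = rotation_z(np.pi / 2)
t = np.array([0.1, 0.2, 0.3])
vertex = np.array([[1.0, 0.0, 0.0]])
posed = apply_6d_pose(R, t, vertex)
```

Here `posed` holds the vertex in the camera frame; with per-frame poses like this, a 3D object model can be overlaid on each RGB-D frame of an interaction sequence.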