Identifying a workpiece in industrial processes using depth sensors has received increasing attention over the past few years. However, this is a challenging task, particularly when the object is large or the scene is cluttered; in these scenarios, the captured point clouds do not provide sufficient information to detect the object. To address this issue, we present a hierarchical fragment matching method for 3D object detection and pose estimation. We build a library of object fragments by scanning the object from different viewpoints. A descriptor, named Clustered Centerpoint Feature Histogram (CCFH), is proposed to compute the features of each fragment. The proposed descriptor is designed to improve the robustness of the existing Clustered Viewpoint Feature Histogram (CVFH) descriptor. Subsequently, an Extreme Learning Machine (ELM) classifier is applied to identify matched segments between the scene and the fragment library. Finally, the pose of the object in the scene is estimated from the matched segments. Unlike existing approaches that require a CAD model of the object or a pre-registration process, the proposed method directly uses the scanned point clouds of the object. Experimental results are presented to illustrate the performance of the proposed method.
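The pipeline described above can be sketched in miniature. The sketch below is not the paper's method: the `fragment_descriptor` function is a toy stand-in for CCFH (a histogram of point-to-centroid distances, which is rotation invariant), and nearest-neighbor matching by chi-square distance stands in for the ELM classifier. All names and parameters are illustrative assumptions.

```python
import numpy as np

def fragment_descriptor(points, bins=8):
    """Toy stand-in for CCFH: a normalized histogram of distances from
    each point to the fragment centroid. Rotation invariant, since
    rotations preserve distances to the (rotated) centroid."""
    center = points.mean(axis=0)
    d = np.linalg.norm(points - center, axis=1)
    hist, _ = np.histogram(d, bins=bins, range=(0.0, d.max() + 1e-9))
    return hist / hist.sum()

def match_fragment(scene_desc, library):
    """Nearest-neighbor match by chi-square histogram distance,
    a stand-in for the ELM classifier used in the paper."""
    def chi2(a, b):
        return 0.5 * np.sum((a - b) ** 2 / (a + b + 1e-12))
    return int(np.argmin([chi2(scene_desc, d) for d in library]))

# Build a tiny library from two synthetic "fragments".
rng = np.random.default_rng(0)
frag_a = rng.normal(size=(200, 3))                   # roughly spherical cloud
frag_b = rng.normal(size=(200, 3)) * [3.0, 1.0, 1.0]  # elongated cloud
library = [fragment_descriptor(frag_a), fragment_descriptor(frag_b)]

# A "scene" segment that is a rotated copy of fragment B should match B.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
scene = frag_b @ R.T
print(match_fragment(fragment_descriptor(scene), library))  # → 1
```

In the full method, the matched segment pairs would then feed a pose estimation step (e.g. aligning corresponding fragments), which this sketch omits.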