Motion capture, or mocap, is a prevalent technique for capturing and analyzing human articulations. Mocap data are now one of the primary sources of realistic human motion for computer animation, as well as for education, training, sports medicine, video games, and special effects in film. As more applications come to rely on high-quality mocap data and ever larger motion databases become available, there is a pressing need for more effective and robust motion capture techniques, better ways of organizing motion databases, and more efficient methods for compressing motion sequences. I propose a data-driven, segment-based, piecewise linear modeling approach that exploits the redundancy and local linearity exhibited by human motion to describe motions with a small number of parameters. The approach models human motion with a collection of low-dimensional local linear models. I first segment motion sequences into subsequences, i.e., motion segments, each corresponding to a simple behavior. Motion segments of similar behaviors are then grouped together and modeled with a single local linear model. I demonstrate the utility of this approach on four challenging driving problems: estimating human motion from a reduced marker set, estimating missing markers, motion retrieval, and motion compression.
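As a rough illustration of the pipeline sketched above, the following minimal example (my own illustration, not the thesis's implementation) groups fixed-length motion segments by behavior and fits one low-dimensional linear model per group. The choices of k-means for grouping, PCA as the local linear model, and the segment shapes are all illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def fit_local_linear_models(segments, n_clusters, n_components, seed=0):
    """Cluster fixed-length motion segments by behavior (k-means on
    flattened poses, an assumed stand-in for behavior grouping) and
    fit one low-dimensional PCA model per cluster.

    segments: list of (frames, dofs) arrays, all the same shape.
    Returns cluster labels and a dict: cluster id -> fitted PCA model.
    """
    flat = np.stack([s.ravel() for s in segments])  # one row per segment
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(flat)
    models = {c: PCA(n_components=n_components).fit(flat[labels == c])
              for c in range(n_clusters)}
    return labels, models

def reconstruct(segment, model):
    """Project a segment into its local model's subspace and back,
    yielding the low-parameter approximation of the motion."""
    flat = segment.ravel()[None, :]
    return model.inverse_transform(model.transform(flat)).reshape(segment.shape)
```

Each segment is then represented by its cluster id plus a handful of PCA coefficients, which is the source of the compression and estimation power the abstract refers to.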