To accurately recognize and estimate the lane-changing intentions of vehicles, a lane-change intention recognition model based on a TCN-LSTM network is proposed, combining the temporal feature-extraction capability of the TCN (Temporal Convolutional Network) with the gated memory mechanism of the LSTM (Long Short-Term Memory) network. First, the driving intentions of the target vehicle are divided into three classes: going straight, changing lanes to the left, and changing lanes to the right. The running-state indicators of the target vehicle and its surrounding neighboring vehicles (the adjacent leading and following vehicles in the same lane, the left lane, and the right lane) are extracted from the CitySim vehicle trajectory dataset and smoothed with a median filtering algorithm. Second, to overcome the low recognition accuracy, long training times, and slow parameter updates of statistical methods and traditional machine learning, dilated convolutions are used to extract temporal features from the time series, and gated memory units are used to capture the long-term dependencies among those features. With 54 indicators as input parameters, including the speed, acceleration, heading angle, heading-angle change rate, and relative position information of the target vehicle and its surrounding neighboring vehicles, and with the vehicle's lane-change intention as the output, a lane-change intention recognition model based on the TCN-LSTM network is constructed. Finally, the recognition accuracies of the TCN, SVM (Support Vector Machine), LSTM, and TCN-LSTM models under different input time steps are comparatively analyzed.
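The two signal-processing building blocks named above can be illustrated with a minimal NumPy sketch: sliding-window median filtering to smooth a trajectory indicator, and a causal dilated 1-D convolution of the kind a TCN layer applies. The function names and parameters here are illustrative, not taken from the paper's implementation.

```python
import numpy as np

def median_filter(x, k=5):
    """Sliding-window median smoothing of a 1-D trajectory signal.

    Edges are padded by repeating the boundary value so the output
    has the same length as the input; k is the (odd) window size.
    """
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(xp, k)
    return np.median(windows, axis=1)

def causal_dilated_conv(x, w, dilation=1):
    """Causal dilated 1-D convolution, the core TCN operation.

    y[t] = sum_j w[j] * x[t - j*dilation], so the output at time t
    depends only on current and past samples; left zero-padding
    keeps the output the same length as the input.
    """
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.pad(np.asarray(x, dtype=float), (pad, 0))
    return np.array([
        sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
        for t in range(len(x))
    ])
```

For example, a window of `k=3` suppresses a single-frame spike in a speed trace, and a kernel `w=[0, 1]` with `dilation=1` simply delays the signal by one frame, showing the causal (past-only) receptive field.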
The results show that, when the input time-series length is 150 frames, the recognition accuracy of the TCN-LSTM model reaches a maximum of 96.67%, and that, in overall classification accuracy, the TCN-LSTM model outperforms the LSTM, TCN, and SVM models by 1.34, 0.84, and 2.46 percentage points, respectively, demonstrating better classification performance.