Acoustic scene identification aims to recognize the acoustic environment from an audio signal. Usually one first divides the signal into multiple short-time frames and then computes frame-level features. A natural question is how to make use of these frame-level features for identification. In this paper, we compare two feature aggregation methods. The first is Majority Voting (MV), which treats each frame-level feature as an independent feature vector and performs identification with a majority voting strategy; an acoustic signal is thus represented by multiple feature vectors. The second is Supervector, which maps the frame-level features to a single feature vector, so that an acoustic signal is represented by one feature vector. In particular, we consider three types of Supervector: Gaussian Supervector, Factor Analysis Supervector, and i-vector. We then compare Supervector with MV on an acoustic scene identification task, employing several classifiers, including Gaussian Mixture Model (GMM), Support Vector Machine (SVM), Multilayer Perceptron (MLP), and Deep Neural Network (DNN). Experimental results indicate that the two feature aggregation methods give very similar performance; nonetheless, each has its own advantages and disadvantages.
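The MV aggregation step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the frame labels and function name are hypothetical, and in practice each frame's label would come from a trained classifier (e.g., GMM, SVM, MLP, or DNN) applied to the frame-level features.

```python
from collections import Counter

def majority_vote(frame_predictions):
    """Aggregate per-frame class predictions into a single scene label
    by majority voting: the most frequent frame label wins."""
    counts = Counter(frame_predictions)
    label, _ = counts.most_common(1)[0]
    return label

# Hypothetical per-frame predictions for one audio clip:
frames = ["street", "street", "park", "street", "park"]
print(majority_vote(frames))  # prints "street"
```

Ties could be broken by frame-level classifier scores; the sketch simply takes the first most-common label.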