Simulation of LDPC convolutional decoders with CPU and GPU

Chi H. Chan, Chung Ming Lau

Research output: Chapter in book / Conference proceeding › Conference article published in proceeding or book › Academic research › peer-review

2 Citations (Scopus)

Abstract

In this paper, the Sum-Product Algorithm (SPA) and the Min-Sum Algorithm (MSA) are used for decoding low-density parity-check convolutional codes (LDPC-CCs). The two algorithms have been implemented and run in three different computing environments. The first environment is a single-threaded Central Processing Unit (CPU); the second is a multi-threaded CPU based on OpenMP (Open Multi-Processing); and the third is a multi-threaded Graphics Processing Unit (GPU). The error performance of the LDPC-CCs and the simulation times under the three computing environments and the two decoding algorithms are evaluated and compared. It is found that the different computing environments produce very similar error performance. It is also concluded that using the GPU computing platform reduces the simulation time substantially.
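
The paper itself does not reproduce source code. As a rough illustration of the kind of inner loop that the OpenMP and GPU implementations described above would parallelise, the C sketch below shows one min-sum check-node update distributed over check nodes with an OpenMP parallel for. The function name min_sum_check_update, the message arrays llr_in/llr_out, the degree array deg and the bound MAX_DEG are illustrative assumptions and are not taken from the paper.

/* Minimal sketch (not from the paper): one min-sum check-node update,
 * parallelised over check nodes with OpenMP. Data layout is assumed. */
#include <math.h>
#include <omp.h>

#define MAX_DEG 16  /* assumed upper bound on check-node degree */

/* For each check node c and each connected edge i, the outgoing
 * check-to-variable message is: (product of the signs of the other
 * incoming LLRs) x (minimum magnitude of the other incoming LLRs). */
void min_sum_check_update(int num_checks, const int *deg,
                          const double llr_in[][MAX_DEG],
                          double llr_out[][MAX_DEG])
{
    #pragma omp parallel for schedule(static)
    for (int c = 0; c < num_checks; c++) {
        for (int i = 0; i < deg[c]; i++) {
            double sign = 1.0, min_mag = INFINITY;
            for (int j = 0; j < deg[c]; j++) {
                if (j == i) continue;            /* extrinsic rule: skip edge i */
                sign *= (llr_in[c][j] < 0.0) ? -1.0 : 1.0;
                double mag = fabs(llr_in[c][j]);
                if (mag < min_mag) min_mag = mag;
            }
            llr_out[c][i] = sign * min_mag;
        }
    }
}

Compiled with OpenMP support (e.g. gcc -fopenmp), the loop over check nodes is spread across CPU threads; a GPU version would typically map the same per-check-node work to one thread per check node or per edge, which is consistent with the speed-up reported in the abstract.
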
Original language: English
Title of host publication: 2012 2nd International Conference on Consumer Electronics, Communications and Networks, CECNet 2012 - Proceedings
Pages: 2854-2857
Number of pages: 4
DOIs
Publication status: Published - 11 Jun 2012
Event: 2012 2nd International Conference on Consumer Electronics, Communications and Networks, CECNet 2012 - Three Gorges, China
Duration: 21 Apr 2012 - 23 Apr 2012

Conference

Conference: 2012 2nd International Conference on Consumer Electronics, Communications and Networks, CECNet 2012
Country/Territory: China
City: Three Gorges
Period: 21/04/12 - 23/04/12

Keywords

  • CPU
  • error-correction code
  • GPU
  • LDPC convolutional code
  • OpenMP

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Electrical and Electronic Engineering
