M.Tech. Error Control Coding syllabus, 2nd semester, 2020 scheme (20ECS23)

Module-1 Information theory

Information theory:

Introduction, Entropy, Source coding theorem, Discrete memoryless channel, Mutual information, Channel capacity, Channel coding theorem (Chap. 5 of Text 1).
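The entropy and channel-capacity definitions above can be made concrete with a few lines of code. The lab work below specifies SCILAB/MATLAB; this is a minimal Python sketch, for illustration only, of source entropy and the capacity of a binary symmetric channel:

```python
import math

def entropy(p):
    """H(X) = -sum p_i * log2(p_i), in bits, for a discrete distribution p."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def bsc_capacity(eps):
    """Capacity C = 1 - H(eps) bits/use of a binary symmetric channel
    with crossover probability eps."""
    return 1.0 - entropy([eps, 1.0 - eps])

print(entropy([0.5, 0.5]))    # fair coin: 1.0 bit
print(bsc_capacity(0.5))      # useless channel: capacity 0.0
```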

 

Introduction to algebra:

Groups, Fields, Binary field arithmetic, Construction of Galois fields GF(2^m) and their properties (statements of theorems only, without proof), Computation using GF(2^m) arithmetic, Vector spaces and matrices (Chap. 2 of Text 2).
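As an illustration of GF(2^m) computation, the following Python sketch (the labs use SCILAB/MATLAB; this is for illustration only) multiplies field elements in polynomial basis, assuming the standard textbook primitive polynomial x^4 + x + 1 for GF(2^4):

```python
def gf_mul(a, b, m=4, prim=0b10011):
    """Multiply a and b in GF(2^m); elements are integers whose bit i is
    the coefficient of x^i. prim is the primitive polynomial
    (default x^4 + x + 1 for GF(2^4))."""
    r = 0
    while b:
        if b & 1:          # add a * (current power of x) into the product
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << m):   # reduce modulo the primitive polynomial
            a ^= prim
    return r

# alpha = x = 0b0010; alpha^3 = 0b1000; alpha * alpha^3 = alpha^4 = alpha + 1
print(gf_mul(0b0010, 0b1000))   # 3, i.e. 0b0011
```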

Module-2 Linear block codes

Linear block codes:

Generator and parity check matrices, Encoding circuits, Syndrome and error detection, Minimum distance considerations, Error detecting and error correcting capabilities, Standard array and syndrome decoding, Single parity check (SPC) codes, Repetition codes, Self-dual codes, Hamming codes, Reed-Muller codes, Product codes and interleaved codes (Chap. 3 of Text 2).
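A small Python sketch (illustrative only; the labs use SCILAB/MATLAB) of generator-matrix encoding and syndrome computation for a systematic (7, 4) Hamming code. The particular P below is one common choice; any P whose H has seven distinct nonzero columns gives an equivalent single-error-correcting code:

```python
import numpy as np

# Systematic (7, 4) Hamming code: G = [I | P], H = [P^T | I]
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 1, 1],
              [1, 0, 1]])
G = np.hstack([np.eye(4, dtype=int), P])
H = np.hstack([P.T, np.eye(3, dtype=int)])

def encode(u):
    """Map a 4-bit message to its 7-bit codeword."""
    return (np.array(u) @ G) % 2

def syndrome(r):
    """Syndrome s = H r^T; all-zero iff r is a codeword."""
    return (H @ np.array(r)) % 2

c = encode([1, 0, 1, 1])
print(c, syndrome(c))          # valid codeword -> zero syndrome
c[2] ^= 1                      # single bit error at position 2
print(syndrome(c))             # equals column 2 of H, locating the error
```

The syndrome of a single-bit error equals the corresponding column of H, which is exactly how syndrome decoding locates and corrects the error.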

Module-3 Cyclic codes

Cyclic codes:

Introduction, Generator and parity check polynomials, Encoding of cyclic codes, Syndrome computation and error detection, Decoding of cyclic codes, Error trapping decoding, Cyclic Hamming codes, Shortened cyclic codes (Chap. 4 of Text 2).
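Systematic cyclic encoding amounts to polynomial division by the generator polynomial g(x). A minimal Python sketch (illustrative; the labs use SCILAB/MATLAB), using g(x) = x^3 + x + 1, which generates the (7, 4) cyclic Hamming code:

```python
def poly_mod(dividend, divisor):
    """Remainder of GF(2) polynomial division; polynomials as bit masks
    (bit i is the coefficient of x^i)."""
    dl = divisor.bit_length()
    while dividend.bit_length() >= dl:
        dividend ^= divisor << (dividend.bit_length() - dl)
    return dividend

def cyclic_encode(msg, g=0b1011, r=3):
    """Systematic encoding: c(x) = x^r m(x) + [x^r m(x) mod g(x)].
    Default g(x) = x^3 + x + 1, the (7, 4) cyclic Hamming code."""
    shifted = msg << r                      # x^r * m(x)
    return shifted | poly_mod(shifted, g)   # append parity bits

print(bin(cyclic_encode(0b1001)))   # 0b1001110: message bits, then parity
```

Every codeword produced this way is divisible by g(x), which is what syndrome computation (the remainder of the received polynomial) exploits for error detection.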

Module-4 BCH codes

BCH codes:

Binary primitive BCH codes, Decoding procedures, Implementation of Galois field arithmetic (6.1, 6.2, 6.7 of Text 2). Primitive BCH codes over GF(q), Reed-Solomon codes (7.2, 7.3 of Text 2).

 

Majority Logic decodable codes:

One-step majority logic decoding, Multiple-step majority logic decoding (8.1, 8.4 of Text 2).

Module-5 Convolutional codes

Convolutional codes:

Encoding of convolutional codes: Systematic and nonsystematic convolutional codes, Feedforward encoder inverse, A catastrophic encoder. Structural properties of convolutional codes: state diagram, state table, state transition table, tree diagram, trellis diagram. Viterbi algorithm. Sequential decoding: Log-likelihood metric for sequential decoding (11.1, 11.2, 12.1, 13.1 of Text 2).
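The encoder and the Viterbi algorithm listed above can be sketched compactly. The following Python illustration (the labs use SCILAB/MATLAB) encodes with the classic rate-1/2, constraint-length-3 (7, 5) octal code, chosen here purely as an example, and decodes with hard-decision Viterbi using a Hamming-distance branch metric:

```python
def conv_encode(bits, g=((1, 1, 1), (1, 0, 1))):
    """Rate-1/2 feedforward convolutional encoder; g holds the
    generator sequences (default: the (7, 5) octal code)."""
    m = len(g[0]) - 1
    state = [0] * m                 # shift register, most recent input first
    out = []
    for b in bits:
        window = [b] + state
        for gi in g:
            out.append(sum(w * c for w, c in zip(window, gi)) % 2)
        state = window[:m]
    return out

def viterbi(received, g=((1, 1, 1), (1, 0, 1))):
    """Hard-decision Viterbi decoding with a Hamming-distance metric."""
    m = len(g[0]) - 1
    n = len(g)
    num_states = 1 << m
    INF = float('inf')
    metric = [0] + [INF] * (num_states - 1)   # encoder starts in state 0
    paths = [[] for _ in range(num_states)]
    for t in range(0, len(received), n):
        r = received[t:t + n]
        new_metric = [INF] * num_states
        new_paths = [[] for _ in range(num_states)]
        for s in range(num_states):
            if metric[s] == INF:
                continue
            sbits = [(s >> i) & 1 for i in range(m)]   # bit 0 = newest
            for b in (0, 1):
                window = [b] + sbits
                exp = [sum(w * c for w, c in zip(window, gi)) % 2
                       for gi in g]
                d = metric[s] + sum(x != y for x, y in zip(r, exp))
                ns = (b | (s << 1)) & (num_states - 1)  # shift in new bit
                if d < new_metric[ns]:                  # keep the survivor
                    new_metric[ns] = d
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[min(range(num_states), key=lambda s: metric[s])]

msg = [1, 0, 1, 1, 0, 1, 0, 0]        # two tail zeros terminate the trellis
code = conv_encode(msg)
code[4] ^= 1                          # inject a single channel error
print(viterbi(code) == msg)           # True: the error is corrected
```

Since the (7, 5) code has free distance 5, a single channel error leaves the transmitted path with the smallest metric, so the decoder recovers the message exactly.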

 

Course outcomes:

At the end of the course the student will be able to:

1. Understand the concepts of entropy, information rate, and capacity for the discrete memoryless channel.

2. Apply modern algebra and probability theory to coding.

3. Compare block codes, such as linear block codes and cyclic codes, with convolutional codes.

4. Detect and correct errors for different data communication and storage systems.

5. Analyze and implement different block code encoders and decoders, as well as convolutional encoders and decoders, including the soft- and hard-decision Viterbi algorithms.

 

Question paper pattern:

The SEE question paper will be set for 100 marks and the marks scored will be proportionately reduced to 60.

  • The question paper will have ten full questions carrying equal marks.
  • Each full question is for 20 marks.
  • There will be two full questions (with a maximum of four sub questions) from each module.
  • Each full question will have sub questions covering all the topics under a module.
  • The students will have to answer five full questions, selecting one full question from each module.

 

Students have to conduct the following experiments as a part of CIE marks along with other Activities:

Software to be used: SCILAB/MATLAB

1. Simulate the BER performance of the (7, 4) Hamming code on an AWGN channel. Use the QPSK modulation scheme. Channel decoding is to be performed through maximum-likelihood decoding. Plot the bit error rate versus SNR (dB), i.e. Pe,b versus Eb/N0. Consider a binary input vector of 5 lakh bits. Use the following parity check matrix for the (7, 4) Hamming code. Also find the coding gain. (Refer: http://www.dsplog.com/2012/03/15/hamming-code-soft-harddecode/ )

2. Simulate the BER performance of the (2, 1, 3) binary convolutional code with generator sequences g(1) = (1 0 1 1) and g(2) = (1 1 1 1) on an AWGN channel. Use the QPSK modulation scheme. Channel decoding is to be performed through Viterbi decoding. Plot the bit error rate versus SNR (dB), i.e. Pe,b versus Eb/N0. Consider a binary input vector of 3 lakh bits. Also find the coding gain.

3. Simulate the BER performance of a rate-1/3 Turbo code. The Turbo encoder uses two recursive systematic encoders with G(D) = [1, (1 + D^4)/(1 + D + D^2 + D^3 + D^4)] and a pseudo-random interleaver. Use the QPSK modulation scheme. Channel decoding is to be performed through the maximum a posteriori (MAP) decoding algorithm. Plot the bit error rate versus SNR (dB), i.e. Pe,b versus Eb/N0. Consider a binary input vector of around 3 lakh bits and a block length of 10384 bits. Also find the coding gain.

4. Use a MATLAB simulation to confirm that SOVA (Soft Output Viterbi Algorithm) is inferior to MAP decoding in terms of bit error performance, and give the reason why. Consider a rate ½ Turbo code punctured from the rate 1/3 Turbo code. The puncturing matrix is [1 0 ; 0 1] . Demonstrate the decoding process of the code. (Refer: Example 6.1 from ‘A Practical Guide to Error-control Coding Using MATLAB’, Yuan Jiang, ISBN: 9781608070886, Artech House Publishers, 2010)
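The experiments above all share a common core: a Monte-Carlo loop that maps bits to QPSK symbols, adds AWGN, demodulates, and counts errors. The syllabus specifies SCILAB/MATLAB; as a hedged starting-point sketch, here is the uncoded-QPSK baseline of that loop in Python (the coded curves are obtained by inserting the encoder/decoder around it):

```python
import numpy as np

def qpsk_awgn_ber(ebn0_db, nbits=200_000, rng=np.random.default_rng(1)):
    """Monte-Carlo BER of uncoded Gray-mapped QPSK over AWGN.
    One bit per rail; with Eb normalised to 1, the per-dimension
    noise variance is N0/2 = 1 / (2 * Eb/N0)."""
    bits = rng.integers(0, 2, nbits)
    # map bit pairs to (+-1 +- 1j) constellation points
    sym = (2 * bits[0::2] - 1) + 1j * (2 * bits[1::2] - 1)
    ebn0 = 10 ** (ebn0_db / 10)
    sigma = np.sqrt(1 / (2 * ebn0))
    r = sym + sigma * (rng.standard_normal(sym.size)
                       + 1j * rng.standard_normal(sym.size))
    # hard decisions, one bit per rail
    hat = np.empty(nbits, dtype=int)
    hat[0::2] = (r.real > 0).astype(int)
    hat[1::2] = (r.imag > 0).astype(int)
    return np.mean(hat != bits)

# Theory: Pb = Q(sqrt(2 Eb/N0)); at Eb/N0 = 4 dB this is about 1.25e-2
print(qpsk_awgn_ber(4.0))
```

Sweeping `ebn0_db` over a range and plotting the result on a log scale gives the uncoded reference curve; the coding gain asked for in each experiment is the horizontal gap between this curve and the coded one at a target BER.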

 

Textbooks:

1. ‘Digital Communication Systems’, Simon Haykin, Wiley India Pvt. Ltd., ISBN 978-81-265-4231-4, First edition, 2014

2. ‘Error Control Coding’, Shu Lin and Daniel J. Costello, Jr., Pearson, Prentice Hall, 2nd edition, 2004

 

Reference Books:

1. ‘Theory and Practice of Error Control Codes’, R. E. Blahut, Addison Wesley, 1984

2. ‘Introduction to Error control coding’, Salvatore Gravano, Oxford University Press, 2007

3. ‘Digital Communications - Fundamentals and Applications’, Bernard Sklar, Pearson Education (Asia) Pvt. Ltd., 2nd Edition, 2001