Monday, June 3, 2019

Evaluation Of An Error Control Codec Information Technology Essay

The assignment's objective is to design and evaluate an error control codec. This aims to prove in practice the Hamming code theory.

In the first part there is a design of an encoder and its simulation. From the encoder simulation we can see how the code words are generated and when a codeword is valid. The decoder's purpose is to recover the codeword from the received word. To accomplish this, syndrome theory was used. A design and a simulation of the decoder are shown in Answer 2. Finally, a codec is designed with an addition of XOR gates to introduce errors. The main reason for this is to understand why the Hamming code can detect 2 errors but correct only one.

Introduction to Hamming linear block codes

Noise causes errors (data distortion) during transmission, so a received message contains a number of errors. A consequence of noise is the alteration of one or more bits of a codeword. Alteration of a bit means inversion of its state, because signals have binary form. Some examples of noisy communication channels are a) an analogue telephone line over which two modems communicate digital information and b) a disk drive. There are two solutions that can achieve perfect communication over an imperfect noisy communication channel: the physical solution and the system solution. Physical modifications improve the characteristics of the communication channel. Information theory and coding theory offer an alternative approach: we accept the given noisy channel as it is and add communication systems to it, so that we can detect and correct the errors introduced by the channel. As shown in Figure 1, we add an encoder before the channel and a decoder after it. The encoder encodes the source message s into a transmitted message t, adding redundancy to the original message in some way. The channel adds noise to the transmitted message, yielding a received message r.
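The channel model just described, in which each transmitted bit is independently inverted with some probability p, is the binary symmetric channel. A minimal sketch in Python (the function name `bsc` is illustrative, not from the assignment):

```python
import random

def bsc(bits, p, rng=None):
    """Binary symmetric channel: flip each bit with probability p."""
    rng = rng or random.Random(0)
    return [b ^ (1 if rng.random() < p else 0) for b in bits]

t = [1, 1, 1, 0, 0, 0]   # transmitted codeword
r = bsc(t, 0.2)          # received word, possibly corrupted
errors = sum(a != b for a, b in zip(t, r))
print(r, "flipped bits:", errors)
```

With p = 0 the received word equals the transmitted word; with p = 1 every bit is inverted. The decoder's job is to undo the flips introduced in between.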
The decoder uses the known redundancy introduced by the encoding system to infer both the original signal and the added noise.

Figure 1 Error correcting codes for the binary symmetric channel [1]

The only cost of the system solution is a computational requirement at the encoder and decoder.

Error detection and error correction

In order to make error correction possible, the bit errors must be detected. When an error has been detected, the correction can be obtained by a) the receiver asking for repeated transmission of the erroneous codeword until a correct one has been received, known as Automatic Repeat Request (ARQ), or b) using the structure of the error correcting code to correct the error, known as Forward Error Correction (FEC). Forward Error Correction is used for this assignment.

Figure 2 The main methods to introduce error correction coding: error detection with Automatic Repeat Request, and Forward Error Correction with block codes or convolutional codes

Linear block codes

Linear block codes are a class of parity check codes that can be characterized by the (n, k) notation. The encoder transforms a block of k message digits into a longer block of n codeword digits constructed from a given alphabet of elements. When the alphabet consists of two elements (0 and 1), the code is a binary code comprising binary digits (bits). [4] The current assignment on linear block codes is restricted to binary codes.

The output of an information source is a sequence of binary digits 0 or 1 (since we discuss binary codes). In block coding, this binary information sequence is segmented into message blocks of fixed length. Each block can represent any of 2^k distinct messages. The channel encoder transforms each k-bit data block into a larger block of n bits, called code bits. The (n-k) bits, which the channel encoder adds to each data block, are called redundant or parity bits. Redundant or parity bits carry no new information. Such a code is referred to as an (n, k) code.
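The segmentation step above can be sketched directly: the source stream is cut into k-bit message blocks, each of which will be mapped to an n-bit codeword (the helper name `segment` is illustrative):

```python
def segment(bits, k=3):
    """Split an information sequence into fixed-length k-bit message blocks."""
    assert len(bits) % k == 0, "pad the stream to a multiple of k first"
    return [bits[i:i + k] for i in range(0, len(bits), k)]

stream = [1, 0, 1, 0, 0, 1, 1, 1, 0]
print(segment(stream))  # three 3-bit message blocks
```

For the (6,3) code of this assignment, k = 3, so each block is one of 2^3 = 8 distinct messages and the encoder will append n - k = 3 parity bits to it.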
[5] The encoding result is the codeword.

Any generator matrix of an (n, k) code can be reduced by row operations and column permutations to the systematic form. [6] We call a code systematic when the first k digits (information or message bits) of the codeword are exactly the same as the message block and the last n-k digits are the parity bits, as shown below.

Figure 3 (n, k) systematic block code: k message (information) bits followed by n-k redundant (parity) bits, forming an n-digit codeword

Encoding and Decoding of Linear Block Codes

The generator matrix is a matrix of basis vectors. The generator matrix G for an (n, k) block code can be used to generate the appropriate n-digit codeword from any given k-digit data sequence. The H and corresponding G matrices for the current assignment's block code (6, 3) are shown below, where H is the parity check matrix and G is the generator matrix:

G =
1 0 0 | 1 1 0
0 1 0 | 1 0 1
0 0 1 | 0 1 1

H =
1 1 0 | 1 0 0
1 0 1 | 0 1 0
0 1 1 | 0 0 1

The first three columns are the data bits and the remaining three columns are the parity bits. Systematic code words are sometimes written so that the message bits occupy the left-hand portion of the codeword and the parity bits occupy the right-hand portion. This reordering has no effect on the error detection or error correction properties of the code. [4]

Study of G shows that on the left of the partition there is a 3x3 identity matrix and on the right of the partition there is a parity check section. This part of G is the transpose of the left-hand portion of H. As this code has single error correcting capability, dmin must be 3, so the minimum weight of a nonzero codeword must be 3. As the identity matrix has a single one in each row, the parity check section must contain at least two ones. In addition to this constraint, its rows cannot be identical. [7] The parity check bits are selected so they are independent of each other.

The Hamming distance between two code words is defined as the number of bits in which they differ.
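The Hamming distance just defined can be computed directly; a minimal sketch:

```python
def hamming_distance(a, b):
    """Number of bit positions in which two equal-length words differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

# Two codewords of the (6,3) code, e.g. 100110 and 010101
print(hamming_distance([1, 0, 0, 1, 1, 0], [0, 1, 0, 1, 0, 1]))  # differ in 4 positions
```

For a linear code, the distance between two codewords equals the weight of their modulo-2 sum, which is why the minimum distance can be read off the codeword weights.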
The weight of a binary codeword is defined as the number of ones which it contains (the number of nonzero elements, i.e. bits).

The codeword is given by the multiplication of the data bits and the generator matrix. The operations of modulo-2 multiplication (AND) and modulo-2 addition (EXOR) are used for the binary field. The parity check equations for this code are

P1 = D1 EXOR D2
P2 = D1 EXOR D3
P3 = D2 EXOR D3

If the components of the transmitted output satisfy these equations, then the received codeword is valid. These equations can be written in matrix form as Hc = 0, where c is the codeword.

The syndrome

Let c be a code vector which was transmitted over a noisy channel. At the receiver we might obtain a corrupted vector r. The decoder must recover c from r. The decoder computes

S = Hr

where S is called the syndrome and r is the received vector (arranged as a column vector). If S is nonzero, then r is not a code word. The syndrome is the result of a parity check performed on r to determine whether r is a valid member of the codeword set. If r is a member, the syndrome S has the value 0. If r contains detectable errors, the syndrome has some nonzero value. The decoder will then take actions to locate the errors and correct them, in the case of FEC.

No column of H can be all zeros, or else an error in the corresponding codeword position would not affect the syndrome and would be undetectable. All columns of H must be unique: if two columns of H were identical, errors in these two corresponding codeword positions would be indistinguishable. [4]

The Hamming code can correct a single bit error, and can detect two bit errors assuming no correction is attempted.

Solutions to assignment questions

Task 1

Design the encoder for a (6,3) Hamming single error correcting codec using the interleaved P1P2D1P3D2D3 format. You can implement your parity generation using XOR gates. Simulate your circuit to check for correct operation.

Answer 1

An encoder is a device used to change a signal (such as a bitstream) or data into a code.
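In code terms, the encoding described above is a modulo-2 vector-matrix product of the message with G. A minimal sketch using the (6,3) generator matrix implied by the codeword table, in systematic D1D2D3P1P2P3 order rather than the interleaved circuit wiring:

```python
# Generator matrix G = [I | P] for the (6,3) code, systematic order D1D2D3P1P2P3
G = [
    [1, 0, 0, 1, 1, 0],
    [0, 1, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1],
]

def encode(msg):
    """Codeword = msg x G over GF(2): modulo-2 sum of the rows of G selected by msg."""
    return [sum(m * g for m, g in zip(msg, col)) % 2
            for col in zip(*G)]

print(encode([1, 1, 1]))  # -> [1, 1, 1, 0, 0, 0]
```

Since the code is systematic, the first three output bits repeat the message and only the last three (the parity bits) have to be computed, which is why the hardware encoder needs nothing more than XOR gates.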
The code may serve any of a number of purposes, such as compressing information for transmission or storage, encrypting or adding redundancy to the input code, or translating from one code to another. This is usually done by means of a programmed algorithm, especially if any part is digital, while most analogue encoding is done with analogue circuitry. [3] The encoder creates the codeword as a combination of information and parity bits.

Interleaving is a way to arrange data in a non-contiguous way in order to increase performance and avoid burst errors. In our case we use the interleaved format to protect the data bits from continuous errors.

Figure 4 Encoder design for a (6,3) Hamming single error correcting codec

Since the encoder is for a (6,3) Hamming single error correcting codec, there are 3 information bits and 3 parity bits. Thus the encoder generates 2^k code words, where k is the number of information bits: 2^3 = 8.

The H and G matrices given above apply for the (6, 3) Hamming code. All possible code words are:

Message   Codeword   Weight
000       000000     0
001       001011     3
010       010101     3
100       100110     3
011       011110     4
101       101101     4
110       110011     4
111       111000     3

Table 1 All possible code words

The minimum distance is dmin = 3.

Figure 5 Encoder simulation

Checking if c = (D1D2D3P1P2P3) is a codeword: the EXOR gate is a logic gate that gives an output of 1 when exactly one of its inputs is 1.

X1 (Input)   X2 (Input)   Y1 (Output)
0            0            0
0            1            1
1            0            1
1            1            0

Table 2 Truth table of the EXOR gate

c is a valid codeword.

Task 2

Design the decoder for a (6,3) Hamming single error correcting codec using the interleaved P1P2D1P3D2D3 format. You can use a 3-to-8 line decoder for syndrome decoding and XOR gates for the controlled inversion. Simulate your circuit to check for correct operation.

Answer 2

A decoder is a device which does the reverse of an encoder, undoing the encoding so that the original information can be retrieved. The same method used to encode is usually just reversed in order to decode.
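The syndrome-based reversal described here can be sketched as follows: compute S = Hr, match S against the columns of H to locate a single-bit error, and invert that bit. This is a sketch for the systematic D1D2D3P1P2P3 order, assuming at most one error (the 3-to-8 line decoder in the circuit plays the role of the column lookup):

```python
# Parity check matrix H = [P^T | I] matching the (6,3) code's codeword table
H = [
    [1, 1, 0, 1, 0, 0],
    [1, 0, 1, 0, 1, 0],
    [0, 1, 1, 0, 0, 1],
]

def decode(r):
    """Compute the syndrome S = H r; if nonzero, flip the bit whose H column equals S."""
    s = [sum(h, 0) % 2 if False else sum(hb * rb for hb, rb in zip(row, r)) % 2
         for h, row in ((None, row) for row in H)]
    if any(s):
        pos = [list(col) for col in zip(*H)].index(s)  # error position from matching column
        r = r[:]
        r[pos] ^= 1  # controlled inversion, the XOR gate in the circuit
    return r[:3]     # recovered message bits D1 D2 D3

print(decode([1, 0, 1, 0, 0, 0]))  # 111000 with an error in D2 -> [1, 1, 1]
```

A received word with zero syndrome is returned unchanged; one with a single flipped bit is corrected before the message bits are extracted.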
[3] The decoder tries to recover the correct data word from the codeword. This means it is the stage where detection and correction of the codeword take place.

Figure 6 Decoder design for a (6,3) Hamming single error correcting codec

Decode the received codeword.

Figure 7 Decoder simulation

r is the received word 111000, so r is a code word.

Task 3

Join your encoder to your decoder and add an XOR gate with an input in each bit transmission line to allow you to introduce errors into the transmission. Simulate your circuit and check that it can cope with the six single errors as expected.

Answer 3

Figure 8 Codec design

Figure 9 The six single errors

As shown in the figure above, the codec can cope with the six single errors. This is possible because of the code's minimum distance:

Message   Codeword   Weight
000       000000     0
001       001011     3
010       010101     3
100       100110     3
011       011110     4
101       101101     4
110       110011     4
111       111000     3

Table 3 All possible code words and their Hamming weights

The minimum distance of a linear block code is defined as the smallest Hamming distance between any pair of code words in the code. [5] The minimum distance is dmin = 3.

The error correcting capability t of a code is defined as the maximum number of guaranteed correctable errors per codeword:

t = floor((dmin - 1) / 2)

For dmin = 3 we can see that all t = 1 bit error patterns are correctable. In general, a t-error correcting (n, k) linear code is capable of correcting a total of 2^(n-k) error patterns. [4]

Task 4

By experimenting with your implemented codec, examine the effect, in terms of additional errors, of (i) all 15 double errors, (ii) all 20 triple errors, (iii) all 15 quadruple errors, (iv) all 6 quintuple errors, (v) the single sextuple error. Note: you only need to consider one of the 8 possible input data words.
Why?

Answer 4

(i) Figure 10 The 15 double errors
(ii) Figure 11 The 20 triple errors
(iii) Figure 12 The 15 quadruple errors
(iv) Figure 13 The 6 quintuple errors
(v) Figure 14 The single sextuple error

Since the error correcting capability is t = 1, our codec cannot correct more than 1 error, hence the above results.

Task 5

Calculate the post-codec probability of a code being in error, A(n), for each of the five categories examined in Task 4. Then calculate the overall number of errors per 6 bit word, Eav, given by the following model based on the binomial distribution, as a function of the channel bit error probability p. Plot the decoded error probability as a function of p. Over what range of p do you conclude that this codec is useful and why?

Answer 5

A(n) = 1 - (number of corrected errors / number of total errors)

A(n) is always 1 except in the case where the codec detects and corrects a single error, in which case A(n) = 0.

Using MATLAB for the plot:

p=0:0.01:1;
Eav=15*p.^2.*(1-p).^4+20*p.^3.*(1-p).^3+15*p.^4.*(1-p).^2+6*p.^5.*(1-p).^1+p.^6.*(1-p).^0;
pd=Eav/6;
plot(p,pd)
xlabel('Bit error probability (p)')
ylabel('Decoder error probability Pd(p)')
grid on

Figure 15 Plot of the decoder error probability (pd) as a function of p

Conclusions

Parity bits must be added for error detection and correction. The Hamming distance is the criterion for error detection and correction. Error detection can be done with the addition of one parity bit, but error correction needs more parity bits (Hamming code). The Hamming code can detect 2 bit errors, assuming no correction is attempted, and can correct only a single bit error. The ability to correct single bit errors comes at a cost which is less than sending the entire message twice; in any case, sending a message twice does not accomplish error correction.
