Product Purchase Process Analysis

Top PDFs for Product Purchase Process Analysis:

An Area Efficient (31, 16) BCH Decoder for Three Errors

In practice, binary BCH and Reed-Solomon codes are the most commonly used variants of BCH codes. They have applications in a variety of communication systems: digital subscriber loops, wireless systems, networking communications, and magnetic data storage systems. Continuous demand for ever-higher data rates and storage capacity provides an incentive to devise very high-speed, space-efficient VLSI implementations of BCH decoders. The first decoding algorithm for binary BCH codes was devised by Peterson in 1960.

Support Vector Machine based Decoding Algorithm for BCH Codes, Journal of Telecommunications and Information Technology, 2016, nr 2

In communication systems, there is an increasing demand for reliable and efficient transmission of data. When data is transmitted over a noisy communication channel, errors are bound to occur, and error control coding techniques are used to detect and correct them. The two main types of error correcting codes are block codes and convolutional codes. Bose-Chaudhuri-Hocquenghem (BCH) codes are cyclic error correcting block codes with applications in digital, space, and satellite communications. Conventional hard-decision decoding (HDD) algorithms such as the Peterson-Gorenstein-Zierler algorithm and the Berlekamp-Massey (BM) algorithm have a standard error correcting capability of t = ⌊(d_min − 1)/2⌋.
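The bounded-distance correcting capability stated at the end of the abstract can be checked with a short computation; this is a generic sketch, not code from the paper:

```python
# Hard-decision (bounded-distance) decoding corrects up to
# t = floor((d_min - 1) / 2) errors.
def correcting_capability(d_min: int) -> int:
    """Errors correctable by a bounded-distance decoder."""
    return (d_min - 1) // 2

# A code with minimum distance 7 (e.g. a triple-error-correcting
# BCH code) gives t = 3; distance 5 gives t = 2.
print(correcting_capability(7), correcting_capability(5))  # 3 2
```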

Complexity Reduction Area Efficient BCH Code for NAND Flash Memory

Future work includes the following. The 15-bit codeword length (n) with 5-bit message length (k) can correct up to three errors (t) during data transmission, but a correction capability of only three errors is not enough for real-time applications. For this reason, the codeword length (n) can be increased and, in the next design step, a new algorithm can be adopted in place of the Berlekamp-Massey algorithm. Decoding latency and complexity can thus be reduced, and the design can be implemented on a Spartan-3 FPGA.


Grobner Bases Method for Biometric Traits Identification and Encryption

Once a biometric trait is captured, residual random noise is removed using filters (as a directed smoothing process); see for instance [13]. The extracted feature vector, or codeword as we will more commonly call it (see below), is then output as a binary vector. Since the binary feature vectors of biometric templates acquired from the same person most probably differ from each other, it is necessary to detect and rectify the difference between the data acquired in the enrollment and verification steps [14] [15]. This correction takes the place of the usual template matching in present biometric systems. The matching between the enrolled and verified feature vectors can therefore be modeled as transmitting messages through a noisy communication channel, and the proposed matching algorithm measures the diversity between the extracted and enrolled codewords. Each codeword (or bit string) is represented (encoded) as secure data by computing its syndrome with respect to a chosen low-density parity-check (LDPC) code. LDPC codes were first discovered by Gallager in 1962 [16]. In this work we develop a new biometric authentication algorithm that is based not only on the data contained in biometric templates but also on a randomly selected codeword from an LDPC code. Consequently, we propose a syndrome decoding algorithm (as a replacement for the matching process) based on Gröbner bases calculations as a decoding scheme for the combination of the data contained in biometric traits and the suggested codewords.
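The syndrome step the abstract describes can be sketched in a few lines. The 3 x 6 parity-check matrix, the feature vectors, and the single-bit noise below are all hypothetical toy values; a real system uses a large sparse LDPC matrix and a full syndrome decoder such as the Gröbner-basis scheme the paper proposes:

```python
import numpy as np

# Toy parity-check matrix H (hypothetical; real LDPC matrices are large
# and sparse).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def syndrome(x):
    """Syndrome of a binary vector with respect to H (mod-2 arithmetic)."""
    return H.dot(x) % 2

enrolled = np.array([1, 0, 1, 1, 0, 0])
probe    = np.array([1, 0, 1, 0, 0, 0])      # one bit of acquisition noise

# Matching compares templates only through syndromes: the syndrome of the
# difference reveals a low-weight error pattern without exposing the
# template itself.
diff = enrolled ^ probe
print(syndrome(diff).tolist(), int(diff.sum()))  # nonzero syndrome, weight 1
```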

Modified Huffman Algorithm for Image Encoding and Decoding

3.2. Efficient Huffman table specification. We tried several different ad-hoc mechanisms to store the Huffman tables. The problem was to find a mechanism that performed well for all possible Huffman tables. We ended up with one that performed quite well, which we now briefly describe. It uses 4 bits to give the section of the first symbol, then for each of the next symbols a code to tell its section where
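The tables being stored come from the standard Huffman construction, which can be sketched as follows. This is the generic textbook algorithm, not the paper's 4-bit "section" storage mechanism:

```python
import heapq
from collections import Counter

# Standard Huffman construction: repeatedly merge the two least-frequent
# subtrees; each merge deepens every symbol in both subtrees by one bit.
def huffman_code_lengths(freqs):
    """Map each symbol to its Huffman code length."""
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)                      # tie-breaker keeps tuples orderable
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)
        f2, _, d2 = heapq.heappop(heap)
        merged = {s: l + 1 for s, l in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

lengths = huffman_code_lengths(Counter("abracadabra"))
print(lengths["a"])                      # most frequent symbol -> length 1
```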


An Iterative Decoding Algorithm for Fusion of Multimodal Information

In many applications like ASR, well-trained unimodal models might already be available. Iterative decoding utilizes such models directly; thus, extending already existing unimodal systems to multimodal ones is easier. Another common scheme used to integrate unimodal HMMs is the product HMM [15]. In our simulations, we see that the product rule performs as well as the joint model, but the product rule has the added disadvantage that it assumes a one-to-one correspondence between the hidden states of the two modalities. The generalized multimodal version of the iterative decoding algorithm (see Section 5) relaxes this requirement. Moreover, the iterative decoding algorithm performs better than the joint model and the product HMM in the presence of background noise, even in cases where there is a one-to-one correspondence between the two modalities.

In noisy environments, the frames affected by noise in different modalities are at best nonoverlapping and at worst independent. The joint models are not able to separate out the noisy modalities from the clean ones. For this reason, the iterative decoding algorithm outperforms the joint model at low SNR. In the case of other decision-level fusion algorithms such as multistream HMMs [16] and the reliability-weighted summation rule [17], one has to estimate the quality (SNR) of the individual modalities to obtain good performance. Iterative decoding does not need such a priori information. This is a very significant advantage of the iterative decoding scheme, because the quality of the modalities is in general time-varying. For example, if the speaker keeps turning away from the camera, video features become very unreliable for speech segmentation. The exponential weighting scheme of multistream HMMs requires real-time monitoring of the quality of the modalities, which is in itself a very complex problem.

Optimizing Chien Search usage in the BCH Decoder

ABSTRACT: In the decoding of Bose-Chaudhuri-Hocquenghem (BCH) codes, the most complex block is the Chien search block. In the BCH decoding process, error correction is performed bit by bit, so a parallel implementation is required. Since a parallel Chien search implementation requires a large area, a strength-reduced parallel architecture for the Chien search is presented. In this paper, the syndrome computation is done using the conventional method, and the inversion-less Berlekamp-Massey algorithm is used for solving the key equation.
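The operation the Chien search block performs can be illustrated with a small serial sketch over GF(2^4) with primitive polynomial x^4 + x + 1. The example locator polynomial and field size are illustrative; the paper's strength-reduced parallel architecture is a hardware optimization of this same evaluation:

```python
# Build exp/log tables for GF(16), primitive polynomial x^4 + x + 1.
PRIM = 0b10011
EXP, LOG = [0] * 30, [0] * 16
x = 1
for i in range(15):
    EXP[i] = EXP[i + 15] = x             # alpha^i, duplicated for wrap-around
    LOG[x] = i
    x <<= 1
    if x & 0b10000:
        x ^= PRIM

def gf_mul(a, b):
    """Multiply in GF(16) via exp/log tables."""
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def chien_search(lam):
    """Evaluate the error-locator polynomial lam (lowest degree first)
    at every nonzero field element alpha^i; return exponents of roots."""
    roots = []
    for i in range(15):
        val = 0
        for j, c in enumerate(lam):
            val ^= gf_mul(c, EXP[(i * j) % 15])
        if val == 0:
            roots.append(i)
    return roots

# Locator with roots alpha^2 and alpha^7:
# (x + alpha^2)(x + alpha^7) = alpha^9 + (alpha^2 + alpha^7) x + x^2.
lam = [EXP[9], EXP[2] ^ EXP[7], 1]
print(chien_search(lam))                 # -> [2, 7]
```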

Efficient Incremental Decoding for Tree to String Translation

Our training corpus consists of 1.5M sentence pairs with about 38M/32M words in Chinese/English, respectively. We first word-align them with GIZA++ and parse the Chinese sentences using the Berkeley parser (Petrov and Klein, 2007), then apply the GHKM algorithm (Galley et al., 2004) to extract tree-to-string translation rules. We use the SRILM Toolkit (Stolcke, 2002) to train a trigram language model with modified Kneser-Ney smoothing on the target side of the training corpus. At decoding time, we again parse the input sentences into trees and convert them into translation forests by rule pattern-matching (Mi et al., 2008).

Noisy Interpolating Sets for Low-Degree Polynomials

For the case of degree-1 polynomials, it is not hard to see that any noisy interpolating set implies a PRG and vice versa. That is, if G : F^s → F^n is a PRG for polynomials of degree 1, then the image of G is a noisy interpolating set. Similarly, if S ⊆ F^n is a noisy interpolating set for degree-1 polynomials, then any one-to-one mapping whose image is S is a PRG for such polynomials. For larger values of d there is no such strong correspondence between PRGs and noisy interpolating sets. As before, given a PRG we get a noisy interpolating set (without an obvious efficient interpolation algorithm); however, it is not clear how to get a PRG from a noisy interpolating set. Constructions of PRGs against low-degree polynomials over very large fields were given in [5] using tools from algebraic geometry. Independently from our work, Viola [14], building on the works of [6, 9], showed (over any field) that d · S is a PRG for degree-d polynomials whenever S is a PRG for degree-1 polynomials. In particular, this result implies that d · S is a noisy interpolating set. However, [14] did not consider the problem of giving a noisy interpolating set and in particular did not give an efficient interpolation algorithm. One can view our work as saying that the construction analyzed in [6, 9, 14] with respect to minimal distance also has an efficient decoding algorithm.

Receivers and CQI Measures for MIMO-CDMA Systems in Frequency-Selective Channels

In this paper, we first derive a single CQI measure for JE systems in frequency-selective channels, in order to avoid the complications of multidimensional mappings. The CQI proposed here is based on a so-called per-Walsh-code joint detection structure consisting of a front-end linear filter followed by joint symbol detection among all the streams. We derive a class of filters that maximizes the so-called constrained mutual information, and show that the conventional LMMSE and MVDR equalizers belong to this class. Similar to the notion of generalized SNR (GSNR) [1], this constrained mutual information provides us with a CQI measure describing the MIMO link quality. Such a CQI measure is essential in providing a simple one-dimensional mapping both for link adaptation and for generating short-term curves for the purpose of link-to-system mapping for JE schemes. For the case of SE transmission, on the other hand, we extend the successive decoding algorithm of PARC [2, 3] to multipath channels, and show that in this case successive decoding achieves the

Decoding Algorithm in Statistical Machine Translation

Table 2: Examples of Correct, Okay, and Incorrect Translations: for each translation, the first line is an input German sentence, the second line is the human-made target translation for


Efficient Staggered Decoding for Sequence Labeling

This section presents the new decoding algorithm. The key is to reduce the number of labels examined. Our algorithm locates the best label sequence by iteratively solving labeling problems with a reduced label set. This results in significant time savings in practice, because each iteration becomes much more efficient than solving the original labeling problem. More importantly, our algorithm always obtains the exact solution. This is because the algorithm allows us to check the optimality of the solution achieved by using only the reduced label set.
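The core idea of decoding with a reduced label set can be sketched with ordinary Viterbi decoding. The scores below are random toy values, and the label subset is chosen by a simple heuristic; the paper's staggered algorithm additionally grows the set across iterations and certifies optimality, which this sketch does not reproduce:

```python
import numpy as np

def viterbi(emit, trans):
    """Exact best label sequence for emission scores emit[t, y] and
    transition scores trans[y_prev, y] (higher is better)."""
    T, L = emit.shape
    score = emit[0].copy()
    back = np.zeros((T, L), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + trans        # score of every transition
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + emit[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1], float(score.max())

rng = np.random.default_rng(1)
emit = rng.normal(size=(6, 8))               # 6 positions, 8 labels
trans = rng.normal(size=(8, 8))

# Reduced pass: decode over only the 3 labels with the best total
# emission score. Its score can never exceed the exact full-set score,
# since every reduced-set path is also a full-set path.
keep = np.argsort(emit.sum(axis=0))[-3:]
_, score_reduced = viterbi(emit[:, keep], trans[np.ix_(keep, keep)])
_, score_full = viterbi(emit, trans)
print(score_reduced <= score_full)           # True
```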

Decoding Algorithm for Ternary RM Codes

Muller first put forward these codes, and a decoding algorithm for them was devised by Reed. The decoding algorithm used majority logic, which is based on the concept of finite geometry. A majority-logic decoding algorithm can decode finite-geometry codes, including Euclidean geometry (EG) and projective geometry (PG) codes [Peterson and Weldon Jr. (1972), Goethals and Delsarte (1968)]. Peterson and Weldon Jr. (1972), Weldon Jr. (1969), and Chen (1971, 1972) showed that the number of majority-logic gates used in a decoder can be reduced. Rudolph and Hartmann (1973) showed that the complexity of the decoder may be reduced to a great extent, but at the expense of decoding delay. For first-order binary RM codes, MacWilliams and Sloane (1977) proposed a decoding algorithm. Tokiwa, Sugimura, Namekawa and Kasahara (1982) interpreted binary RM codes in terms of the concept of superimposition and presented a new decoding algorithm; they compared it with the conventional algorithm for cyclic binary RM codes with respect to the problem of decoding delay.

A High Spectral Efficient Non Binary TCM Scheme Based Novel Decoding Algorithm for 4G Systems

This paper deals with the MIMO-OFDM technique applied to fourth-generation (4G) wireless communication systems. This technique can provide high-data-rate transmission without increasing transmit power or expanding bandwidth, can use space resources efficiently, and has a bright future. The paper presents channel-coding-assisted STBC-OFDM systems and employs coded modulation (CM) techniques, since the signal bandwidth available for wireless communications is limited. The proposed system deals with non-binary error control coding of the TCM-aided STBC-OFDM scheme for transmission over the Rayleigh channel. A new non-binary decoding method, the Yaletharatalhussein decoding algorithm, is designed and implemented for decoding non-binary convolutional codes; it is based on the trellis diagram representing the convolutional encoder. The Yaletharatalhussein decoding algorithm outperforms the Viterbi algorithm and other algorithms in its simplicity, very small computational complexity, decoding reliability for high-state TCM codes suitable for 4G, decreasing errors with increasing word length, and ease of implementation in real-time applications. The simulation results show that the non-binary TCM-based Yaletharatalhussein decoding-algorithm-assisted STBC-OFDM scheme outperforms the binary and non-binary decoding methods.

Non Orthogonal N MSK Modulation for Wireless Relay D2D Communication

The N-MSK codes tested were transparent and were decoded using the soft-decision Viterbi algorithm. The user velocities were 0 mph for indoor, 4 mph for pedestrian, and 70 mph for vehicular. The channel scenarios are easily modified by altering the delay, power profile, and Doppler spectra to create virtually any single-input single-output (SISO) environment based on measured data. This gives a much more flexible channel model, which corresponds to actual measured data and produces a time-varying, frequency-selective channel that is much more realistic and is essential for testing certain distortion mitigation techniques.

Exploring Recombination for Efficient Decoding of Neural Machine Translation

Intuitively, for efficiency, we do not need to expand all of these partial hypotheses (states), since they have similar future predictions. In fact, this corresponds to the idea of hypothesis recombination (also known as state merging, which will be used interchangeably) from traditional Phrase-Based Statistical Machine Translation (PBSMT) (Koehn et al., 2003). Given a method to find mergeable states, we can employ recombination in NMT decoding as well.
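Hypothesis recombination as used in PBSMT can be sketched in a few lines. The merge key here (the last two target tokens, as an n-gram model would see them) and the scores are illustrative; finding mergeable NMT states, which is the paper's subject, requires a different criterion:

```python
# Merge partial hypotheses that share the same recent target context,
# keeping only the highest-scoring one per context.
def recombine(hypotheses, context=2):
    """hypotheses: list of (tokens, score) pairs; merge equal contexts."""
    best = {}
    for tokens, score in hypotheses:
        key = tuple(tokens[-context:])
        if key not in best or score > best[key][1]:
            best[key] = (tokens, score)
    return list(best.values())

hyps = [(["the", "cat", "sat"], -1.2),
        (["a", "cat", "sat"], -1.5),     # same 2-token context: merged away
        (["the", "dog", "ran"], -2.0)]
print(len(recombine(hyps)))              # -> 2
```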


Efficient encoding and decoding of binaural sound with resonance audio

libraries. Note, however, that future algorithms within Resonance Audio might change, as new methods and DSP algorithms are constantly being developed. It is intended that such changes be recorded as separate documents referencing this paper. Also, note that not all the algorithms used by Resonance Audio have been described here, but only the ones directly pertaining to the Resonance Audio spatializer. For example, room effects algorithms (early reflections and late reverberation) are mostly documented in the code itself; however, some critical parts of the late reverberation algorithm are published in [22]. Also, the near-field effect and directivity control algorithms are intended to be documented separately. Future work should focus on, for example: practical ways of individualization of SH HRIRs in the Resonance Audio binaural decoder; adding efficient decoding for regular and irregular physical loudspeaker setups; simulation of advanced sound source radiation patterns and API methods for creating volumetric sound sources; as well as further quality improvements (for example, better modelling of spread functions at low frequencies) and efficiency improvements to existing algorithms.

Design and Implementation of an Universal Lattice Decoder on FPGA

The sequential architectures of the original and improved algorithms offer decoding rates of 1.17 Mbit/s and 4.39 Mbit/s, respectively, when implemented on a Xilinx Virtex-II 1000 FPGA (device XC2V1000-6FF896). From the synthesis results we observe that the maximum frequency of the original sphere decoder is higher than that of the improved algorithm. Even so, the decoding rate of the improved sphere decoding algorithm is far better and shows a large improvement over the original algorithm. This is because of better values of the number of clock cycles per iteration (CPIT_ni) and of the average number of times a particular iteration or state is visited (ITC_ni) for the improved sphere decoding algorithm. These contribute to the better decoding rate of the decoder.
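The relationship between the cycle counts and the decoding rate can be made concrete with a small throughput model. All numbers below are made up for illustration, not the paper's synthesis figures:

```python
# Throughput model: decoding rate = bits per decoded block * f_max /
# expected cycles per block, where the expected cycles are the sum over
# decoder states of (cycles per visit) * (average visits per block).
def decoding_rate(bits_per_block, f_max_hz, cpit, itc):
    """Throughput in bit/s given per-state cycle and visit counts."""
    cycles_per_block = sum(c * n for c, n in zip(cpit, itc))
    return bits_per_block * f_max_hz / cycles_per_block

rate = decoding_rate(bits_per_block=16,
                     f_max_hz=100e6,
                     cpit=[4, 6, 3],       # clock cycles per state visit
                     itc=[1.0, 2.5, 1.2])  # mean visits per decoded block
print(f"{rate / 1e6:.2f} Mbit/s")
```

Lowering either the cycles per visit or the visit counts raises the rate even at a fixed (or lower) clock frequency, which is exactly the trade-off the abstract describes.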

Efficient Decoding of Turbo Codes with Nonbinary Belief Propagation

in building a nonbinary Tanner graph of the turbo code using only parity-check nodes defined over a certain finite group, and symbol nodes representing groups of bits. The Tanner graph is obtained by a proper clustering of order p of the binary parity-check matrix of the turbo code, called the "binary image." However, clustering the commonly used binary representation of turbo codes turns out not to be suitable for building a nonbinary Tanner graph representation that leads to good performance under iterative decoding. Thus, we show in the paper that there exist suitable preprocessing functions of the parity-check matrix (first block of Figure 1) for which, after the bit clustering (second block of Figure 1), the corresponding nonbinary Tanner graphs have good topological properties. This preliminary two-round step is necessary to obtain good Tanner graph representations that outperform the classical representations of turbo codes under iterative decoding. The second step is then a BP-based decoding stage (last block in Figure 1), which consists in running several iterations of group belief propagation (group BP), as introduced in [11], on the nonbinary Tanner graph. Furthermore, we show that the decoder can also fully benefit from the decoding diversity that inherently arises from concurrent extended Tanner graph representations, leading to the general concept of decoder diversity. The proposed algorithms show very good performance, as opposed to the binary BP decoder, and serve as a first step toward viewing LDPC and turbo codes within a unified framework from the decoder's point of view, which strengthens the idea of handling them with a common approach.
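One reading of the order-p clustering step is that the binary parity-check matrix is cut into p x p binary blocks, each nonzero block becoming one edge of the nonbinary Tanner graph. The matrix below is a made-up toy example, and the paper's preprocessing step that precedes the clustering is not shown:

```python
import numpy as np

# Cut a binary matrix into p x p blocks; a block is an edge of the
# nonbinary Tanner graph iff it is nonzero.
def cluster(Hb, p):
    """Return (check, symbol, block) triples for nonzero p x p blocks."""
    m, n = Hb.shape
    assert m % p == 0 and n % p == 0
    edges = []
    for i in range(0, m, p):
        for j in range(0, n, p):
            block = Hb[i:i + p, j:j + p]
            if block.any():
                edges.append((i // p, j // p, block))
    return edges

Hb = np.array([[1, 0, 1, 1],
               [0, 1, 0, 1],
               [0, 0, 1, 0],
               [0, 0, 0, 1]])
edges = cluster(Hb, 2)
print([(c, s) for c, s, _ in edges])     # -> [(0, 0), (0, 1), (1, 1)]
```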

Efficient use of a hybrid decoding technique for LDPC codes

Low-density parity-check (LDPC) codes were first introduced by Gallager in 1963 [1] and rediscovered by MacKay [2] in the late 1990s. LDPC codes are characterized by a sparse parity-check matrix H, for which an iterative decoder becomes an attractive option. One example is the belief propagation (BP) decoder, which achieves the optimal MAP (maximum a posteriori) decoding condition if the code graph does not contain cycles ([3], p. 211). However, most LDPC codes have cycles in their Tanner graph representation, mainly at small and medium blocklengths [4], which adversely affect code performance. The BP decoding operation is executed for a preset number of iterations; a decoding failure leads to a frame error, implying a number of erroneously decoded bits. In the error-floor region, where LDPC codes exhibit a sudden saturation in word error rate (WER) at sufficiently high signal-to-noise ratio (SNR), the bit errors are primarily caused by trapping sets ([3], p. 225). A considerable amount of research has gone into designing decoders that mitigate the errors caused by trapping sets [4-6].
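The iterate-and-check structure of LDPC decoding can be sketched with a hard-decision bit-flipping decoder on a toy parity-check matrix (both the matrix and the error pattern below are made up). BP passes soft messages instead, but the syndrome check and the fixed iteration budget are the same:

```python
import numpy as np

# Toy sparse parity-check matrix (real LDPC matrices are much larger).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [0, 0, 1, 1, 0, 1]])

def bit_flip_decode(y, H, max_iter=10):
    """Flip the bit in the most unsatisfied checks until H y = 0 (mod 2)."""
    y = y.copy()
    for _ in range(max_iter):
        s = H.dot(y) % 2
        if not s.any():
            return y, True               # all parity checks satisfied
        unsat = H[s == 1].sum(axis=0)    # unsatisfied-check count per bit
        y[unsat.argmax()] ^= 1
    return y, False                      # decoding failure (frame error)

received = np.array([0, 0, 1, 0, 0, 0])  # all-zero codeword + 1 bit error
decoded, ok = bit_flip_decode(received, H)
print(decoded.tolist(), ok)              # -> [0, 0, 0, 0, 0, 0] True
```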
