This week was devoted to documentation.

The decoding functions for linear codes were moved to decoder.py in sage.coding.

The 'decode' method in linear_code.py was modified so that the new decoding algorithms are now supported.

Examples and documentation were added to each function.

https://bitbucket.org/vsuaste/coding-theory/commits/21fdf54936c34a51434ae8c93af9a708987c8820

https://bitbucket.org/vsuaste/coding-theory/commits/b710a5169e67221344f11956c0226d1a1fc97bd7

https://bitbucket.org/vsuaste/coding-theory/commits/827bf6bdb252a5771129c62d96a5668bf4a90d18

https://bitbucket.org/vsuaste/coding-theory/commits/f714bc2d14a18c02d80a3add36742d39a2d6fa30

I'm also still working on a function that could be added, which computes the minimal support words of the code. Here is some explanation: Theory

I'm getting the final patch ready, with the functions ordered alphabetically.

# Programming Projects

## Saturday, September 21, 2013

### GSoC 9th - 14th September

For this week the work was to implement the generalized function for computing the coset leaders of the code; it now works with codes over any finite field.

With this function, covering_radius and newton_radius now work over any finite field as well.

Since computing the covering_radius requires only one coset leader from each coset, I have added a parameter to the coset leaders function that controls whether it computes all coset leaders or only one per coset.
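As a rough illustration of the idea (a brute-force plain-Python sketch, valid for prime fields only; `coset_leaders` and its `all_leaders` flag here are illustrative stand-ins, not the actual Sage function):

```python
from itertools import product

def coset_leaders(H, q, all_leaders=True):
    """Group all vectors of GF(q)^n by their syndrome with respect to the
    parity-check matrix H (arithmetic mod q, so q must be prime here) and
    keep, for each coset, either every minimum-weight vector or just one,
    controlled by `all_leaders` -- mirroring the parameter described above.
    Brute force: only feasible for small codes."""
    n = len(H[0])
    cosets = {}
    for v in product(range(q), repeat=n):
        s = tuple(sum(row[i] * v[i] for i in range(n)) % q for row in H)
        cosets.setdefault(s, []).append(v)
    leaders = {}
    for s, vecs in cosets.items():
        w = min(sum(1 for x in v if x) for v in vecs)
        mins = [v for v in vecs if sum(1 for x in v if x) == w]
        leaders[s] = mins if all_leaders else mins[:1]
    return leaders

# Parity-check matrix of the [3,1] binary repetition code.
H = [[1, 1, 0], [1, 0, 1]]
leaders = coset_leaders(H, 2)
# The covering radius is the largest coset-leader weight (1 for this code),
# which is why one leader per coset suffices for covering_radius.
covering_radius = max(sum(1 for x in vs[0] if x) for vs in leaders.values())
```

With `all_leaders=False` each coset keeps a single leader, which is exactly the cheaper mode covering_radius needs.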

Here is the code, already documented: Code

And here are some examples: theory, examples


## Sunday, September 8, 2013

### GSoC 1st - 8th September

This week a new optimization for the adapted FGLM algorithm has been added.

As expected, the new algorithm is faster at computing the Gröbner basis of the ideal associated to the linear code, in the form we need it for the decoding algorithm.

Here is part of the comparison of the Gröbner basis computations:


```
sage: C = RandomLinearCode(9,3,GF(4,'a'))
sage: I = C.create_ideal()
sage: %time gB = I.groebner_basis('libsingular:stdfglm')
CPU times: user 370.84 s, sys: 2.36 s, total: 373.19 s
Wall time: 375.74 s
sage: %time gb = C.groebner_basis_fglm() # FGLM adapted algorithm
CPU times: user 16.36 s, sys: 0.10 s, total: 16.45 s
Wall time: 16.78 s

sage: C = RandomLinearCode(10,3,GF(5))
sage: %time gb = C.groebner_basis_fglm() # FGLM adapted algorithm
CPU times: user 581.37 s, sys: 2.21 s, total: 583.58 s
Wall time: 590.37 s
sage: I = C.create_ideal()
sage: %time gB = I.groebner_basis()
CPU times: user 1331.38 s, sys: 1.14 s, total: 1332.53 s
Wall time: 1336.74 s
```

In order to make this comparison it was necessary to implement the function that creates the ideal associated to the code. Code

The adapted FGLM algorithm for linear codes over a finite field in the general case, and all the other functions, including the new decoding algorithm, have been documented and tested. The link to the repository:

https://bitbucket.org/vsuaste/coding-theory/commits/e3dc1a07f7d304700053981596493646ca06428f

Here are some examples in which the new decoding algorithm is faster than Sage's syndrome decoding:

```
sage: C = HammingCode(2,GF(7))
sage: C
Linear code of length 8, dimension 6 over Finite Field of size 7
sage: v = random_vector(GF(7),C.length())
sage: %time C.decode(v) # syndrome algorithm
CPU times: user 2.10 s, sys: 0.03 s, total: 2.13 s
Wall time: 2.19 s
(0, 4, 5, 4, 1, 2, 5, 4)
sage: %time C.decode_fq(v) # new decoding algorithm
CPU times: user 0.81 s, sys: 0.02 s, total: 0.83 s
Wall time: 0.87 s
(0, 5, 5, 1, 4, 1, 4, 6)

sage: C = HammingCode(2,GF(11))
sage: v = random_vector(GF(11), C.length())
sage: v
(0, 10, 0, 8, 10, 0, 2, 5, 2, 10, 10, 4)
sage: %time C.decode_fq(v) # new decoding algorithm
CPU times: user 53.32 s, sys: 1.45 s, total: 54.76 s
Wall time: 56.19 s
(0, 1, 0, 10, 8, 0, 0, 2, 3, 8, 8, 5)
sage: %time C.decode(v) # syndrome algorithm still running after hours...

sage: C = HammingCode(3,GF(4,'a'))
sage: v = random_vector(GF(4,'a'),C.length())
sage: v
(a + 1, a + 1, 0, a + 1, 0, 1, a + 1, a + 1, a + 1, a + 1, 0, 1, a, 1, a, 0, a + 1, 0, a, a + 1, 0)
sage: %time C.decode_fq(v) # new decoding algorithm
CPU times: user 81.18 s, sys: 1.36 s, total: 82.55 s
Wall time: 84.30 s
(0, 0, 0, a + 1, 0, 1, 0, 0, 0, 0, 0, 0, a, 1, 1, 0, a + 1, 0, a, a + 1, 1)
sage: %time C.decode(v) # syndrome algorithm still running after hours...
```

## Sunday, September 1, 2013

### GSoC Aug 26th - 31st

For this week the work was to implement an alternative FGLM algorithm for computing the Gröbner basis of the ideal associated to a linear code over an arbitrary finite field. So far I had worked only with the binary case.

It is well known that complete decoding of a linear code is an NP-hard problem, so for codes of long length, or over finite fields of large cardinality, the decoding time is long (it grows exponentially).

I'm still working on optimizing the code to reduce the decoding time as much as possible.

One problem I'm facing is how to generate, in the most efficient way, all vectors of a given weight and length over the finite field. Permutations and combinations are in general expensive operations, and on top of that the number of such vectors grows exponentially with the length of the code.
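One straightforward way to enumerate these vectors, sketched in plain Python (an illustrative helper, not the actual implementation): pick the support, then fill it with every combination of nonzero field elements.

```python
from itertools import combinations, product

def vectors_of_weight(length, weight, nonzero):
    """Yield every vector of the given Hamming weight over a field whose
    nonzero elements are listed in `nonzero` (e.g. [1, 2] for GF(3)):
    choose the support (positions of the nonzero entries), then fill it
    with every tuple of nonzero values."""
    for support in combinations(range(length), weight):
        for values in product(nonzero, repeat=weight):
            v = [0] * length
            for i, x in zip(support, values):
                v[i] = x
            yield tuple(v)

# There are C(length, weight) * (q-1)^weight such vectors, which is the
# exponential growth mentioned above. Over GF(3) (nonzero elements 1, 2):
# C(4,2) * 2^2 = 24 vectors of length 4 and weight 2.
weight_two = list(vectors_of_weight(4, 2, [1, 2]))
```

This avoids generating and filtering all q^n vectors, but the count itself still explodes with the length, which is the real bottleneck.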

Here is the code; documentation and presentation details are still missing: Code

Here is an example:


```
sage: C = RandomLinearCode(6,3,GF(4,'a'))
sage: v = random_vector(GF(4,'a'),C.length())
sage: v
(0, a, a, a + 1, a, a + 1)

# time taken by the syndrome algorithm vs the FGLM algorithm
sage: %timeit C.decode(v) # syndrome algorithm
1000 loops, best of 3: 709 us per loop
sage: %timeit C.decode_fq(v) # using my implemented FGLM
1 loops, best of 3: 609 ms per loop

# the solutions of the syndrome and FGLM algorithms are different
sage: d = C.decode(v)
sage: d
(0, 1, 0, a + 1, a, a + 1)
sage: d1 = C.decode_fq(v)
sage: d1
(0, 1, a + 1, a + 1, a + 1, a + 1)

# check that both d and d1 belong to the same coset
sage: y1 = v-d
sage: y2 = v-d1
sage: H = C.check_mat()
sage: H*y1
(0, a + 1, a)
sage: H*y2
(0, a + 1, a)
```

## Sunday, August 25, 2013

### GSoC Aug 18th - 25th

For this week I have implemented a new decoding algorithm for binary linear codes using the Gröbner basis of the ideal associated to the code.

In the binary case I have decided to use the FGLM algorithm from Singular to compute the Gröbner basis, since it is very fast. So first I construct the ideal associated to the code. Then I create a test-set for the code, and use it to apply the descent decoding algorithm.

More details about this algorithm: Theory

The Gröbner basis and the test-set are computed only once; after that, the decoding process is very fast (a decoding algorithm with precomputed information).
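The descent step itself can be sketched in plain Python for the binary case. The test-set is taken as given here (computing it from the Gröbner basis is the expensive precomputation referred to above), and `descent_decode` is an illustrative name, not the actual Sage method:

```python
def hamming_weight(v):
    return sum(1 for x in v if x)

def descent_decode(y, test_set):
    """Gradient-descent decoding sketch for a binary code: repeatedly
    subtract (XOR, since the field is GF(2)) any test-set codeword that
    strictly lowers the weight of the current word. When no codeword
    helps, the residual word is the error vector, and the decoded
    codeword is the received word minus that error."""
    e = list(y)
    improved = True
    while improved:
        improved = False
        for t in test_set:
            candidate = [a ^ b for a, b in zip(e, t)]
            if hamming_weight(candidate) < hamming_weight(e):
                e = candidate
                improved = True
                break
    # decoded codeword = received word minus the error vector
    return [a ^ b for a, b in zip(y, e)]

# [3,1] repetition code, test-set {(1,1,1)}: a single bit flip is corrected.
decoded = descent_decode([1, 1, 0], [[1, 1, 1]])  # -> [1, 1, 1]
```

The key property of a test-set is that this greedy loop always terminates at a minimum-weight coset representative, which is what makes the per-word decoding step cheap once the precomputation is done.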

I have run some tests to compare the time taken by the different decoding algorithms on different types of binary linear codes:

Results of experiments: Testing Algorithm

I have also documented the functions, with examples and more. The code for this algorithm and its functions is in the commit:

https://bitbucket.org/vsuaste/coding-theory/commits/4d5c8255eaec112b191eb095873eaf09e5708ed4

With this algorithm I have finished the algorithms for binary codes, so I have now started generalizing them to codes over any finite field.

In this case we expect to gain some speed with a new version of the FGLM algorithm implemented specifically for the general case of codes.


## Saturday, August 17, 2013

### GSoC Aug 12th -16th

For this week I have fixed some details in the documentation:

https://bitbucket.org/vsuaste/coding-theory/commits/6b81bda12ea08dbb5fd11d35f786f240#comment-379372

I have implemented a new function for computing the Gröbner basis of a binary linear code.

I noticed that my results and the results using the 4ti2 package are not the same, and this is because of the order of the variables: 4ti2 uses xn > ... > x0, while the order I use is x0 > x1 > x2 > ...

In Sage you can compute the Gröbner basis of an ideal and specify the order of the variables; by default it uses x0 > x1 > x2 > ...

Fortunately this detail does not matter for our purposes (the decoding algorithm), as long as the monomial order is a degree order, and therefore a weight-compatible order.

Here is an Example

And code: Code

(Once I finish the documentation for this one I'll upload it to bitbucket; the output format will also be changed because of the way I'll use it in the Gröbner-basis decoding algorithm.)


## Wednesday, July 31, 2013

### GSoC: Results of GDDA, a new decoding algorithm

As part of the evaluation of the GSoC project, I present the results obtained so far.

I have implemented a new decoding algorithm, GDDA (gradient descent decoding algorithm), based on the Gröbner representation of a code.

The idea of this algorithm is that once the Gröbner representation has been computed, the decoding step is straightforward and takes very little time. That is why I save the Gröbner representation as an attribute of the code, so it only has to be computed once.

I also modified the output format of the grobner_representation method: it now returns a dictionary representing the Matphi function instead of a list.
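The compute-once pattern behind this can be sketched in a few lines of plain Python (class and method names are illustrative, not Sage's actual API):

```python
class GrobnerDecoder:
    """Hypothetical sketch of caching an expensive precomputation on the
    instance: the Groebner representation is computed on first request,
    stored, and reused by every later call."""

    def __init__(self, code):
        self.code = code
        self._representation = None  # cache, filled on first request
        self.computations = 0        # counter, just to show the cache works

    def grobner_representation(self):
        if self._representation is None:
            self._representation = self._compute_representation()
        return self._representation

    def _compute_representation(self):
        # Stand-in for the real, expensive computation; per the post, the
        # actual method returns the standard monomials together with a
        # dictionary for the Matphi function.
        self.computations += 1
        return ([], {})
```

Every decode call can then fetch the cached representation, so only the first decoding of a given code pays the precomputation cost.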

GDDA and Grobner representation code

Here is the comparison I have made of the GDDA against the decoding algorithms already implemented in Sage, such as "syndrome", "nearest neighbor" and guava.

The results are very interesting. I have tried Hamming codes, the extended Golay code and random linear codes.

In the case of Hamming codes the GDDA turned out to be faster than the "syndrome", "nearest neighbor" and "guava" algorithms, even counting the time it takes to compute the Gröbner representation.

In the case of random linear codes the GDDA was again faster than "syndrome" and "nearest neighbor", again even counting the time the GDDA spends computing the Gröbner representation.

With the extended Golay code, "syndrome" and "nearest neighbor" turned out to be faster than the GDDA.

GDDA Comparison

