
We Helped With This MATLAB Programming Assignment: Have A Similar One?


Short Assignment Requirements

With all due respect, I have a paper for which I need to write the code and a MATLAB GUI. It is about medical image compression and decompression (wavelet transform + Kohonen's network + Huffman coding). Can you help me with that?

Assignment Image

MATLAB Assignment Description Image [Solution]
[Screenshot of the solution GUI in MATLAB R2015b: controls for Select Image, Compress Level, Wavelet Type, Plot Level, Block Size, and epochs; Compress, Reconstruct, Save Compressed, and Save Reconstructed buttons; panels showing the original and reconstructed images and the approximation, horizontal, vertical, and diagonal coefficients; a results table reporting CR, MSE, rwMSE, PSNR, and rwPSNR; the Neural Network Training (nntraintool) window is open alongside.]

Assignment Description

Theoretical Improvement of the Image Compression Method Based

on Wavelet Transform

Mourad Rahali 1,2, Habiba Loukil 1, Mohamed Salim Bouhlel 1

1 Sciences and Technologies of Image and Telecommunications, High Institute of Biotechnology, University of Sfax, Tunisia
2 National Engineering School of Gabes, University of Gabes, Tunisia

{..., ..., ...}


Image compression has been performed by several techniques; for example, JPEG and JPEG2000 are lossy compression methods. These methods perform scalar quantization on the values obtained after transformation. The disadvantage of scalar quantization is that it does not exploit the spatial correlation between pixels in the image. To improve compression, several values are quantized together; this is the definition of vector quantization. In this paper, we study and model an approach to image compression by wavelet transform and Kohonen network. We show the role of the wavelet's vanishing moments in improving compression, and we calculate the compression ratio based on the compression parameters.

Keywords: compression ratio, vanishing moments, neural network, wavelet transform.

1. Introduction 

The basic idea of image compression is to reduce the average number of bits per pixel necessary to represent the image. Our study of image compression is based on a lossy compression method [1], meaning that the image reconstructed after a compression and decompression cycle differs from the original image. This difference causes degradation of the original image. Direct compression of images by neural networks gives acceptable results, but these methods are limited in compression ratio and in the quality of the reconstructed images. To improve the quality of the rebuilt image, we combine the discrete wavelet transform [2] with quantization by Kohonen networks [3], and then use Huffman coding to encode the quantized values. The Kohonen network is a lossy compression method classified among unsupervised neural networks. On another level, the wavelet transform can produce sub-images of different importance and identify the relevant information in the details of an image. In this paper, we show the effect of the wavelet's vanishing moments on image compression: the compression ratio increases when the wavelet has many vanishing moments. In a second stage, we express the compression ratio in terms of the compression parameters and compare it with the practical value.

2. Image compression approach 

Our approach is divided into several steps. First, we apply the wavelet transform to the original image at a given decomposition level (1, 2, 3, or 4). Second, we decompose the three detail sub-images into blocks of a given size (2x2, 4x4, 8x8, or 16x16) and keep the approximation image. Third, for each block we search the codebook for the code word with minimum distance from the block; the index of the selected word is added to the index vector that represents the compressed image. Finally, we code the index vector with Huffman coding [4].

Fig. 1 Image compression steps
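The Huffman-coding step at the end of this pipeline can be sketched in pure Python. This is an illustrative sketch, not the authors' MATLAB implementation: the index vector below is a hypothetical example, and heap tie-breaking is an implementation choice the paper does not prescribe.

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code (symbol -> bitstring) from a list of symbols."""
    freq = Counter(symbols)
    if len(freq) == 1:  # degenerate case: a single distinct symbol
        return {next(iter(freq)): "0"}
    # Heap items are (frequency, tiebreaker, tree); a tree is a symbol or a pair.
    heap = [(f, i, s) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)  # merge the two least frequent subtrees
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (t1, t2)))
        count += 1
    code = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):  # internal node: branch on 0/1
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            code[tree] = prefix
    walk(heap[0][2], "")
    return code

index_vector = [3, 3, 3, 0, 1, 3, 0, 3]  # hypothetical codebook indices
code = huffman_code(index_vector)
bits = "".join(code[s] for s in index_vector)
```

Frequent indices receive short codes, so the average code length Huff = sum(L_i * P_i) drops below a fixed-length encoding of the indices.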

2.1. Kohonen’s Network Algorithm

Kohonen's network algorithm [3][4] follows these steps:

Find the winning neuron c of the competition:

$d(X, w_c) \le d(X, w_i), \quad \forall i \ne c$                (1)

where X is the input vector, $w_c$ is the weight vector of the winning neuron c, and $w_i$ is the weight vector of neuron i.

Update the weights $w_i$:

$w_i(t+1) = w_i(t) + h(c, i, t) \cdot [X - w_i(t)]$    (2)

where $w_i(t)$ is the weight vector of neuron i at instant t and h is a function defined by:

$h(c, i, t) = \begin{cases} \alpha(t), & \text{if } i \in N(c, t) \\ 0, & \text{otherwise} \end{cases}$    (3)

The function h defines the extent of the correction applied to the winning neuron c and its neighborhood.

At instant t, the neighbors of the winning neuron c are determined by the function N(c, t). The final neighborhood of a neuron consists of the neuron itself. The function h(c, i, t) assigns the same correction α(t) to all neurons belonging to the neighborhood of the winning neuron at instant t.
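A minimal sketch of rules (1)-(3), assuming a one-dimensional map, a squared-distance competition, a linearly decaying α(t), and a shrinking neighborhood N(c, t); these schedules are illustrative assumptions, not values fixed by the paper.

```python
import random

def train_som(blocks, codebook_size, epochs=10, alpha0=0.5):
    """Train a 1-D Kohonen map on flattened image blocks.

    Eq. (1): the winner c minimizes d(X, w_i).
    Eq. (2): w_i(t+1) = w_i(t) + h(c, i, t) * (X - w_i(t)).
    Eq. (3): h(c, i, t) = alpha(t) inside the neighborhood N(c, t), else 0.
    """
    random.seed(0)  # reproducible toy example
    dim = len(blocks[0])
    codebook = [[random.random() for _ in range(dim)] for _ in range(codebook_size)]
    for epoch in range(epochs):
        alpha = alpha0 * (1 - epoch / epochs)        # decaying correction alpha(t)
        radius = max(1, codebook_size // 2 - epoch)  # shrinking neighborhood N(c, t)
        for x in blocks:
            # Eq. (1): the winning neuron is the nearest code word.
            c = min(range(codebook_size),
                    key=lambda i: sum((a - w) ** 2 for a, w in zip(x, codebook[i])))
            # Eqs. (2)-(3): the same correction for every neuron in N(c, t).
            for i in range(codebook_size):
                if abs(i - c) <= radius:
                    codebook[i] = [w + alpha * (a - w) for w, a in zip(codebook[i], x)]
    return codebook

def quantize(block, codebook):
    """Index of the nearest code word: the symbol stored in the index vector."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - w) ** 2 for a, w in zip(block, codebook[i])))

blocks = [[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.0]]  # toy flattened blocks
codebook = train_som(blocks, codebook_size=4)
index = quantize([0.05, 0.0], codebook)
```

After training, each block of the detail sub-images is replaced by the index of its nearest code word.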

2.2. Image Pretreatment with Wavelet Transform

The wavelet transform decomposes [5] an image into a set of sub-images of different resolutions, corresponding to the various frequency bands. Wavelets are a class of functions used to localize a given signal in both the space and scale domains. Wavelets automatically adapt to both the high-frequency and the low-frequency components of a signal through different window sizes. Wavelets are functions generated from a single function ψ, called the mother wavelet, by dilations a and translations b [5]:

$\psi_{a,b}(x) = \frac{1}{\sqrt{a}}\,\psi\!\left(\frac{x-b}{a}\right)$                               (4)

where ψ must satisfy the following condition:

$\int \left|\psi(x)\right|^{2}\,dx = 1$                               (5)

The wavelet transform represents an arbitrary signal x(t) as a decomposition over the wavelet basis, that is, as an integral over a and b of $\psi_{a,b}$:

$W(a,b) = \int x(t)\,\psi_{a,b}(t)\,dt$                               (6)

In this work the Discrete Wavelet Transform (DWT) is used. It is the discretized version of the continuous wavelet transform defined by (6), suited to efficient computer implementation.

The DWT of a signal x(t) is defined by the equations:

$x(t) = \sum_{m,n} c_{m,n}\,\psi_{m,n}(t)$                               (7)

$c_{m,n} = \int x(t)\,\psi_{m,n}(t)\,dt$                               (8)

The coefficients $c_{m,n}$ characterize the projection of x(t) onto the basis formed by $\psi_{m,n}(t)$. The DWT is implemented using the sub-band coding method. The whole sub-band process consists of a filter bank (a series of filters of different cut-off frequencies) used to analyze the signal at different scales. The procedure starts by passing the signal through a half-band high-pass filter and a half-band low-pass filter. Each filtered signal is then down-sampled, and the resulting low-pass signal is processed in the same way. This process produces the sets of wavelet transform coefficients that can be used to reconstruct the signal.
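A one-dimensional sketch of this sub-band step with the Haar half-band filters, assuming an even-length signal (the 2-D case used for images applies the same step to rows and then to columns); this is an illustration, not the paper's code.

```python
import math

def haar_analysis(signal):
    """One sub-band level: half-band low-pass and high-pass filtering,
    each followed by down-sampling by 2 (Haar filters)."""
    s = 1 / math.sqrt(2)
    approx = [(signal[i] + signal[i + 1]) * s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) * s for i in range(0, len(signal), 2)]
    return approx, detail

def haar_synthesis(approx, detail):
    """Inverse step: up-sample and recombine the two sub-bands."""
    s = 1 / math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) * s)
        out.append((a - d) * s)
    return out

x = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
approx, detail = haar_analysis(x)
y = haar_synthesis(approx, detail)  # reconstructs x
```

Repeating the analysis step on `approx` gives the next decomposition level.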

3. Theoretical study 

3.1. Choice of wavelet transform

The most important property of a wavelet is its number of vanishing moments [6]. Every wavelet must have at least one vanishing moment, and for most applications it is desirable to have more, since more vanishing moments imply a better transformation.

The number of vanishing moments determines the decay speed of the coefficients along the frequency axis (the inverse of scale). We now establish the link between the number of vanishing moments of the mother wavelet and the decay speed of the wavelet coefficients across resolutions.

The wavelet ψ is said to have p vanishing moments if:

$\int_{-\infty}^{+\infty} t^{i}\,\psi(t)\,dt = 0, \quad 0 \le i \le p-1$                                              (9)

Consider the Taylor expansion of the function to analyze, f(t), around a point u. Since the wavelet analysis "moves" along the function with the translation parameter, we set $u = 2^{j} n$. Assuming f(t) is p times differentiable, we have:

$f(t) = \sum_{k=0}^{p-1} \frac{f^{(k)}(u)}{k!}(t-u)^{k} + \frac{f^{(p)}(c)}{p!}(t-u)^{p}$                         (10)

$= \sum_{k=0}^{p-1} \frac{f^{(k)}(u)}{k!}(t-u)^{k} + \mathrm{error}(t)$                                                       (11)

We then analyze the wavelet coefficients, given by:

$\left\langle f(t),\ \frac{1}{\sqrt{2^{j}}}\,\psi\!\left(\frac{t-2^{j}n}{2^{j}}\right)\right\rangle$                (12)

$= \left\langle \sum_{k=0}^{p-1}\frac{f^{(k)}(u)}{k!}(t-u)^{k} + \frac{f^{(p)}(c)}{p!}(t-u)^{p},\ \frac{1}{\sqrt{2^{j}}}\,\psi\!\left(\frac{t-u}{2^{j}}\right)\right\rangle$  (13)

$= \sum_{k=0}^{p-1}\frac{f^{(k)}(u)}{k!}\left\langle (t-u)^{k},\ \frac{1}{\sqrt{2^{j}}}\,\psi\!\left(\frac{t-u}{2^{j}}\right)\right\rangle + X$ (14)

where we set:

$X = \left\langle \frac{f^{(p)}(c)}{p!}(t-u)^{p},\ \frac{1}{\sqrt{2^{j}}}\,\psi\!\left(\frac{t-u}{2^{j}}\right)\right\rangle$            (15)

With the change of variable

$y = \frac{t-u}{2^{j}}, \qquad dy = \frac{1}{2^{j}}\,dt$                       (16)

equation (14) becomes:

$\left\langle f,\ \psi_{j,n}\right\rangle = \sum_{k=0}^{p-1}\frac{f^{(k)}(u)}{k!}\,2^{jk}\,2^{j/2}\int_{-\infty}^{+\infty} y^{k}\,\psi(y)\,dy + X$                                      (17)

Since ψ has p vanishing moments,

$\int_{-\infty}^{+\infty} y^{k}\,\psi(y)\,dy = 0 \quad \text{for } 0 \le k \le p-1,$                 (18)

every term of the sum vanishes and only the remainder term survives:

$\left\langle f,\ \psi_{j,n}\right\rangle = X = \frac{f^{(p)}(c)}{p!}\,2^{j\left(p+\frac{1}{2}\right)}\,M$              (19)

with j the decomposition level, p the number of vanishing moments, and $M = \int y^{p}\,\psi(y)\,dy$ the first non-vanishing moment of the wavelet. For a very regular function f, the wavelet coefficients will therefore be small, and this is especially true for very localized wavelets (fine scales). Hence, more vanishing moments imply a better transformation.
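This cancellation can be checked numerically. As an illustrative sketch that is not taken from the paper, the Daubechies-2 wavelet filter has p = 2 vanishing moments, so its discrete moments of order 0 and 1 vanish while the order-2 moment does not (the Haar filter, with p = 1, only cancels the order-0 moment).

```python
import math

# Daubechies-2 scaling (low-pass) filter coefficients.
s3 = math.sqrt(3)
h = [c / (4 * math.sqrt(2)) for c in (1 + s3, 3 + s3, 3 - s3, 1 - s3)]
# Wavelet (high-pass) filter via the alternating-sign relation g_k = (-1)^k h_{N-1-k}.
g = [((-1) ** k) * h[len(h) - 1 - k] for k in range(len(h))]

def moment(filt, i):
    """Discrete i-th moment sum_k k^i * filt_k of a filter."""
    return sum((k ** i) * c for k, c in enumerate(filt))

m0 = moment(g, 0)  # vanishes: the filter annihilates constants
m1 = moment(g, 1)  # vanishes: the filter annihilates linear ramps
m2 = moment(g, 2)  # first non-vanishing moment, since p = 2
```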

3.2. Development of a new compression ratio

The compression ratio is an evaluation criterion for compression algorithms. It is defined as:

$CR = \left(1 - \frac{k'}{k}\right) \times 100$    (26)

where k' is the number of bits per pixel in the compressed image and k is the number of bits per pixel in the original image.

To take the wavelet parameters into account, and to capture how changes in these parameters intervene, we develop a new formula for the compression ratio.

We express the compression ratio in terms of the compression parameters. With

$k' = \frac{\text{compressed image size}}{\text{number of pixels}}, \qquad \text{number of pixels} = m \times n,$

the compressed image size is the approximation image size plus the index vector size, so:

$k' = \frac{\text{approximation image size} + \text{index vector size}}{m \times n}$    (27)

$\text{approximation image size} = \frac{m \times n}{2^{2j}} \times k$

where j is the decomposition level of the wavelet and k is the number of bits used to code one pixel.

$\text{index vector size} = \frac{m \times n - \dfrac{m \times n}{2^{2j}}}{BS^{2}} \times Huff$    (28)

where $BS^{2}$ is the block size and $Huff = \sum_{i} L_{i} P_{i}$, with $L_{i}$ the length of the i-th Huffman code and $P_{i}$ the probability of occurrence of the i-th Huffman code, subject to $0 < P_{i} \le 1$, $1 \le L_{i} \le k$, and $1 \le i \le BS^{2}$.

Therefore:

$k' = \frac{\dfrac{m \times n}{2^{2j}} \times k + \dfrac{m \times n - \dfrac{m \times n}{2^{2j}}}{BS^{2}} \times \sum_{i} L_{i} P_{i}}{m \times n}$    (29)

$k' = \frac{m \times n \times \left(k \times BS^{2} + \left(2^{2j}-1\right) \times \sum_{i} L_{i} P_{i}\right)}{m \times n \times 2^{2j} \times BS^{2}}$    (30)

$CR = \left(1 - \frac{k \times BS^{2} + \left(2^{2j}-1\right) \times \sum_{i} L_{i} P_{i}}{k \times 2^{2j} \times BS^{2}}\right) \times 100$    (31)
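Formula (31) is easy to evaluate numerically. In the sketch below, the parameter values (k = 8 bits per pixel, level j = 3, block side BS = 4) and the average Huffman code length are hypothetical inputs, not results from the paper.

```python
def theoretical_cr(k, j, bs, huff):
    """Theoretical compression ratio of Eq. (31).

    k    : bits per pixel of the original image
    j    : wavelet decomposition level
    bs   : block side length (blocks are bs x bs pixels)
    huff : average Huffman code length, Huff = sum(L_i * P_i)
    """
    k_prime = (k * bs ** 2 + (2 ** (2 * j) - 1) * huff) / (2 ** (2 * j) * bs ** 2)
    return (1 - k_prime / k) * 100

cr = theoretical_cr(k=8, j=3, bs=4, huff=4.0)  # hypothetical parameter values
```

Consistent with the experiments reported below, the ratio grows with the decomposition level j and with the block size.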

4. Experiments and results 

The following curves correspond to theoretical and practical tests on the image medical.bmp.

Fig. 2 Medical.bmp

Fig. 3 shows the variation of the compression ratio depending on the decomposition level (j), with the size of the self-organizing map (SOM) equal to 4 and the block size equal to 2.

Fig. 3 Compression ratio based on decomposition level

According to Fig. 3, we deduce that the compression ratio is proportional to the decomposition level: the higher the decomposition level, the better the compression ratio. The compression ratio increases because the decreased image resolution requires fewer bits to encode a pixel. Fig. 4 shows the variation of the compression ratio depending on the size of the SOM, with the decomposition level equal to 3 and the block size equal to 4.

Fig. 4 Compression ratio based on size of SOM

According to Fig. 4, we deduce that the compression ratio is inversely proportional to the size of the SOM: the larger the SOM, the lower the compression ratio. The compression ratio decays because of the larger number of bits per pixel. Fig. 5 shows the variation of the compression ratio depending on the block width, with the decomposition level equal to 3 and the size of the SOM equal to 16.

Fig. 5 Compression ratio based on block width


According to Fig. 5, we deduce that the compression ratio is proportional to the block size. The compression ratio grows because of the decrease in the number of blocks. The difference between the theoretical and practical compression ratios is caused by the choice of the codebook, the wavelet type, and the learning image.

Our approach to image compression was evaluated using the following measures: the peak signal-to-noise ratio (PSNR) and the relative weighted peak signal-to-noise ratio (rwPSNR) [7], defined as:

$PSNR = 10 \log_{10}\!\left(\frac{(2^{n}-1)^{2}}{MSE}\right)$                     (32)

$MSE = \frac{1}{M \times N}\sum_{i=1}^{M}\sum_{j=1}^{N}\big(x(i,j) - y(i,j)\big)^{2}$    (33)

$rwPSNR = 10 \log_{10}\!\left(\frac{x_{max}^{2}}{rwMSE}\right)$        (34)

$rwMSE = \frac{1}{MN}\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}\frac{\big(2(x-y)/(x+y)\big)^{2}}{1 + Var(M,N)}$                (35)

Let $X = \{x_{ij}\ |\ i = 1,\dots,M;\ j = 1,\dots,N\}$ and $Y = \{y_{ij}\ |\ i = 1,\dots,M;\ j = 1,\dots,N\}$ be the original image and the test image, respectively, where Var(M,N) is the variance of the test image.
We compare the effect of the wavelet type on the PSNR and rwPSNR depending on the number of bits per pixel (Nbpp).

Fig. 6 Comparison of wavelets type for quality reconstructed image

We notice that the Haar wavelet ("haar") gives better image compression quality than the Coiflet ("coif"), Daubechies ("db"), and Symlet ("sym") wavelets.

5. Conclusion

The interest of this work is the theoretical study of an image compression approach using wavelet transforms and Kohonen's network. We show the effect of the wavelet's vanishing moments on image compression, and we find a new formula that expresses the compression ratio approximately as a function of its parameters. To extend our study, we compare four wavelets according to the image quality metrics PSNR and rwPSNR depending on the number of bits per pixel, and we find that the Haar wavelet performs best.

6. References

[1] M. K. Mathur, S. Loonker and D. Saxena, "Lossless Huffman Coding Technique For Image Compression And Reconstruction Using Binary Trees", IJCTA, Vol. 3, pp. 76-79, January 2010.

[2] P. Raghuwanshi and A. Jain, "A Review of Image Compression based on Wavelet Transform Function and Structure Optimization Technique", International Journal of Computer Technology and Applications, Vol. 4, pp. 527-532, June 2013.

[3] T. Kohonen, "The self-organizing map", Proceedings of the IEEE, Vol. 78, No. 9, September 1990.

[4] D. A. Huffman, "A Method for the Construction of Minimum-Redundancy Codes", Proceedings of the IRE, pp. 1098-1101, September 1952.

[5] G. Boopathi and S. Arockiasamy, "An Image Compression Approach using Wavelet Transform and Modified Self Organizing Map", International Journal of Computer Science Issues, Vol. 8, No. 2, September 2011.

[6] Munteanu, J. Cornelis, G. V. Auwera and P. Cristea, "Wavelet Image Compression - The Quadtree Coding Approach", IEEE Transactions on Information Technology in Biomedicine, Vol. 3, No. 3, September 1999.

[7] H. Loukil, M. H. Kacem and M. S. Bouhlel, "A New Image Quality Metric Using System Visual Human Characteristics", International Journal of Computer Applications, Vol. 60, No. 6, December 2012.


