FRACTAL DIMENSION ESTIMATION IN DIAGNOSING ALZHEIMER ’ S DISEASE

Entropy estimates computed from a limited data set are always biased; consequently, calculating entropy in real applications is not a trivial task. In this paper, we used a generalized definition of entropy to evaluate the Hartley, Shannon, and Collision entropies. Moreover, we applied the Miller and Harris estimates of Shannon entropy, which are well-known bias-correction approaches based on Taylor expansion. Finally, these estimates were improved by Bayesian estimation of the individual probabilities. The methods were tested and then used for recognizing Alzheimer's disease, exploiting the relationship between entropy and fractal dimension to obtain fractal dimensions of 3D brain scans.


Introduction
The very popular box-counting method [1] is based on the generalization of (1) to the form ln N(a) = A_0 − D_0 ln a and its application to the boundary of any set F ⊂ R^d.
As will be shown in the next section, the quantity ln N(a) is an estimate of the Hartley entropy.

Rényi Entropy
Using the natural logarithm instead of the binary logarithm, we can proceed to the definition of Rényi entropy.
Let k ∈ N be the number of events, p_j > 0 their probabilities for j = 1, …, k satisfying ∑_{j=1}^{k} p_j = 1, and q ∈ R. We can define the Rényi entropy [2] as

H_q = (1 − q)^{−1} ln ∑_{j=1}^{k} p_j^q,

which is a generalization of Shannon entropy. Depending on q, we obtain the specific entropies:
• Hartley entropy [3] for q = 0 as H_0 = ln k,
• Shannon entropy [4] for q → 1 as H_1 = −∑_{j=1}^{k} p_j ln p_j,
• Collision entropy [2] for q = 2 as H_2 = −ln ∑_{j=1}^{k} p_j^2.
The resulting theoretical entropies can be used for defining the Rényi dimension [2] as

D_q = −lim_{a→0+} H_q / ln a,

which corresponds to the relationship H_q ≈ A_q − D_q ln a (2) for small covering size a > 0.
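As a quick numerical illustration (a minimal sketch; the function name and the example distribution are ours, not from the paper), the special cases above can be checked against the general Rényi formula:

```python
import math

def renyi_entropy(p, q):
    """Rényi entropy (natural log) of a probability vector p for parameter q.

    q = 1 is handled as the Shannon limit of the general formula."""
    if q == 1:  # Shannon entropy as the q -> 1 limit
        return -sum(pj * math.log(pj) for pj in p)
    return math.log(sum(pj ** q for pj in p)) / (1 - q)

p = [0.5, 0.25, 0.125, 0.125]          # example distribution with k = 4 events
hartley   = renyi_entropy(p, 0)        # ln k
shannon   = renyi_entropy(p, 1)        # -sum p ln p
collision = renyi_entropy(p, 2)        # -ln sum p^2
```

As expected, the three values are ordered H_0 ≥ H_1 ≥ H_2, since Rényi entropy is non-increasing in q.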

Entropy Estimates
There are several approaches to entropy estimation from experimental data sets. Assuming that the number of experiments n ∈ N is finite, we can count the events and obtain n_j ∈ N_0 as the event frequencies for j = 1, …, k. The first approach is naive estimation: we directly estimate k and p_j as

k̂ = card{j : n_j > 0},   p̂_j = n_j / n.

These biased estimates also produce biased entropy estimates H_{0,N} = ln k̂, H_{1,N} = −∑_j p̂_j ln p̂_j, and H_{2,N} = −ln ∑_j p̂_j^2. The second approach is based on the Bayesian estimation of the probabilities p_j as

p*_j = (n_j + 1) / (n + k̂).

This technique is called here semi-Bayesian estimation.
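The naive estimates can be sketched directly from raw event data (a minimal sketch; function and variable names are ours):

```python
import math
from collections import Counter

def naive_estimates(events):
    """Naive estimates k_hat and the entropies H0N, H1N, H2N from raw events."""
    n = len(events)
    counts = Counter(events)                   # event frequencies n_j
    k_hat = len(counts)                        # number of observed events
    p_hat = [nj / n for nj in counts.values()] # naive probabilities n_j / n
    h0 = math.log(k_hat)                               # Hartley estimate
    h1 = -sum(p * math.log(p) for p in p_hat)          # Shannon estimate
    h2 = -math.log(sum(p * p for p in p_hat))          # Collision estimate
    return k_hat, h0, h1, h2

k_hat, h0, h1, h2 = naive_estimates(list("aabbbbcd"))  # n = 8, k_hat = 4
```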
Substituting p*_j into the entropy definitions, we obtain other, but also biased, entropy estimates H_{1,S} and H_{2,S}. The estimate H_{2,S} can be improved by replacing ∑_j (p*_j)^2 with its posterior mean, which yields

H_{2,B} = −ln ∑_{j=1}^{k̂} (n_j + 1)(n_j + 2) / ((n + k̂)(n + k̂ + 1)).

A direct Bayesian estimate of H_1 was also calculated as

H_{1,B} = ψ(n + k̂ + 1) − ∑_{j=1}^{k̂} p*_j ψ(n_j + 2),

where ψ is the digamma function.
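A sketch of the semi-Bayesian probabilities and the digamma-based Shannon estimate follows. The formulas implement the posterior mean under a uniform Dirichlet prior, which is our reading of the estimates above, and the digamma helper is a plain recurrence-plus-asymptotic-series implementation (all names are ours):

```python
import math

def digamma(x):
    """Digamma function psi(x) via recurrence and an asymptotic series."""
    r = 0.0
    while x < 6.0:          # shift the argument upward, where the series is accurate
        r -= 1.0 / x
        x += 1.0
    f = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - f * (1 / 12 - f * (1 / 120 - f / 252))

def bayes_estimates(counts):
    """Semi-Bayesian probabilities p*_j and Bayesian Shannon estimate H1B."""
    n, k = sum(counts), len(counts)
    p_star = [(nj + 1) / (n + k) for nj in counts]          # Laplace-smoothed p_j
    h1_s = -sum(p * math.log(p) for p in p_star)            # semi-Bayesian Shannon
    h1_b = digamma(n + k + 1) - sum(p * digamma(nj + 2)     # posterior-mean Shannon
                                    for p, nj in zip(p_star, counts))
    return p_star, h1_s, h1_b
```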

Bias Reduction
Miller [5] modified the naive estimate H_{1,N} using a first-order Taylor expansion, which produces

H_{1,M} = H_{1,N} + (k̂ − 1)/(2n).

Later, Harris [5] improved the formula to

H_{1,H} = H_{1,N} + (k̂ − 1)/(2n) + (12n^2)^{−1} (∑_{j=1}^{k̂} 1/p_j − 1).

From the theoretical point of view, it is not permissible to replace p_j by its estimate in this correction term. Nevertheless, we investigate biased estimates of H_1 in the form

H_{1,H*} = H_{1,N} + (k̂ − 1)/(2n) + (12n^2)^{−1} (∑_{j=1}^{k̂} r_j − 1),

where r_j = (n + k̂ − 1)/n_j is the Bayesian estimate of 1/p_j.
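The Miller correction and the plug-in Harris correction with r_j in place of 1/p_j can be sketched as follows (function names are ours):

```python
import math

def corrected_shannon(counts):
    """Naive Shannon estimate with Miller and plug-in Harris bias corrections."""
    n, k = sum(counts), len(counts)
    p_hat = [nj / n for nj in counts]
    h_naive = -sum(p * math.log(p) for p in p_hat)
    h_miller = h_naive + (k - 1) / (2 * n)              # first-order Taylor term
    r = [(n + k - 1) / nj for nj in counts]             # Bayesian estimate of 1/p_j
    h_harris = h_miller + (sum(r) - 1) / (12 * n * n)   # second-order plug-in term
    return h_naive, h_miller, h_harris

h_n, h_m, h_h = corrected_shannon([2, 4, 1, 1])  # frequencies with n = 8, k = 4
```

Both corrections only add positive terms, so they push the naive estimate upward, compensating its well-known negative bias.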

Estimation Methodology
The naive, semi-Bayesian, Bayesian, and corrected entropy estimates were tested on 2D and 3D structures with known Hausdorff dimension. The list of the estimates involved is given in Tab. 1. A Sierpinski carpet of size 81 × 81, with D_q = 1.8928 for any q ≥ 0, is a typical 2D fractal set model. Using the estimates from Tab. 1 and the linear regression model (2), we estimated the Rényi dimensions D̂_q and then evaluated their z-scores as a relative measure of bias. The results are included in Tab. 2.
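The box-counting core of this methodology can be sketched as follows: build the 81 × 81 Sierpinski carpet, count occupied boxes at several scales, and fit the regression model ln N(a) = A_0 − D_0 ln a. This is a minimal sketch of the Hartley/naive case only; the paper's full procedure applies every estimate from Tab. 1, and all names below are ours:

```python
import math

def sierpinski_carpet(size=81):
    """Boolean grid: cell (i, j) is filled unless some base-3 digit pair is (1, 1)."""
    def filled(i, j):
        while i or j:
            if i % 3 == 1 and j % 3 == 1:
                return False
            i, j = i // 3, j // 3
        return True
    return [[filled(i, j) for j in range(size)] for i in range(size)]

def box_count(grid, s):
    """Number of s x s boxes containing at least one filled cell."""
    size = len(grid)
    return sum(
        any(grid[i + di][j + dj] for di in range(s) for dj in range(s))
        for i in range(0, size, s) for j in range(0, size, s)
    )

def box_dimension(grid, sizes=(1, 3, 9, 27)):
    """Least-squares slope of ln N(a) against -ln a, with a = s / grid size."""
    size = len(grid)
    xs = [-math.log(s / size) for s in sizes]
    ys = [math.log(box_count(grid, s)) for s in sizes]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

d0 = box_dimension(sierpinski_carpet())  # close to ln 8 / ln 3 = 1.8928
```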

Alzheimer's Disease Diagnosis from Fractal Dimension Estimates
Alzheimer's disease (AD) is the most common form of dementia, and is characterised by a loss of neurons and their synapses. This loss is caused by an accumulation of amyloid plaques between nerve cells in the brain. Morphologically, the affected areas produce rounded clusters of destroyed brain cells, which are visible on brain scans. By contrast, amyotrophic lateral sclerosis (ALS) is a disease of the motor neurons and is not visible on brain scans; in this sense, the brain scans of ALS patients look like those of healthy subjects. The entropy estimators above were therefore used for diagnosing Alzheimer's disease by separating two groups of human brain samples: the first group contained brain scans of patients with AD, and the second brain scans of patients with ALS. We carried out tests on 21 samples (11 for AD and 10 for ALS), represented by 128 × 128 × 128 matrices of thresholded images (θ = 40 %). We used a two-sample t-test for the null hypotheses, and the alternative

hypotheses were H_1: E D̂_q(AD) ≠ E D̂_q(ALS), the null hypotheses being H_0: E D̂_q(AD) = E D̂_q(ALS). The results are included in Tab. 3. The most significant differences between AD and ALS were observed for H_{0,N}, H_{1,S}, and H_{1,B}.

Table 2 (header): Estimate | Sierpinski carpet, D_q = 1.8928 | Five Box Fractal, D_q = 2.3219.
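The group comparison can be sketched with a pooled two-sample t statistic (a minimal sketch with made-up dimension estimates; the actual study compares D̂_q over the 11 AD and 10 ALS scans and reads the p-value from the t distribution with n1 + n2 − 2 degrees of freedom):

```python
import math

def two_sample_t(x, y):
    """Pooled two-sample t statistic for H0: equal means."""
    n1, n2 = len(x), len(y)
    m1, m2 = sum(x) / n1, sum(y) / n2
    v1 = sum((v - m1) ** 2 for v in x) / (n1 - 1)   # sample variances
    v2 = sum((v - m2) ** 2 for v in y) / (n2 - 1)
    sp = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / (sp * math.sqrt(1 / n1 + 1 / n2))

# hypothetical dimension estimates for two groups (illustration only)
t_stat = two_sample_t([2.41, 2.37, 2.44, 2.39], [2.52, 2.49, 2.55, 2.50])
```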

Conclusion
In this paper, we tested estimates of the Hartley, Shannon, and Collision entropies. These estimates were improved by Bayesian estimation and tested on fractals with known fractal dimensions. Finally, the estimates were applied to two groups of brain-scan samples in order to find the best separator. The best separators with regard to this experiment are H_{0,N}, H_{1,S}, and H_{1,B}, which reached the 2 % level of significance. The remaining estimates also gave results below the 5 % level of significance, except for H_{2,N}, which was worst. On the basis of these results, entropy can be used for diagnosing Alzheimer's disease in the future, considering that the methods can still be improved, especially by better estimation of k or by image filtering.

Before explaining the relationship between entropy and dimension, we have to introduce the notion of dimension. Let d ∈ N be the dimension of the Euclidean space in which a d-dimensional unit hypercube is placed. Let m ∈ N be the resolution and a = 1/m the edge length of the covering hypercubes of the same dimension d. The number of covering elements is then given by N = N(a) = a^{−D}. Knowledge of N for fixed a enables direct calculation of the hypercube dimension according to

ln N(a) = −D ln a. (1)

Table 2. Dimension estimates via various entropy estimates.