Detection of Keratoconus Disease

This paper describes the development of an application that detects keratoconus disease from photographs of the eye. At the heart of the application is an algorithm that extracts the features of an eye and then classifies it as affected or healthy. First, a basic identification of keratoconus using supervised learning is proposed; the described method obtains better results than other existing methods on the same dataset. The algorithm is trained on several samples of healthy and affected eyes and then used to classify unknown eye images. It analyzes the eye's corneal curvature and topography using a convolutional neural network (CNN), which is capable of extracting and learning an eye's features, and it classifies the images in the training library with high accuracy. The aim of this study is to define a new classification technique for recognizing keratoconus based on statistical analysis, and to realize the prediction of this classified data with intelligent systems.


Introduction
Keratoconus is a condition in which the cornea thins and bulges outward into a cone shape over time. The cornea is the clear window at the front of the eye; when its shape changes, the eye can no longer focus properly and vision becomes blurry and distorted. The population incidence of keratoconus is about 500 per million. Keratoconus is regarded as a visual disability and requires clinical treatment. In the early stages of the disease it may be treated with contact lenses, but in the most severe cases a corneal transplant can be necessary. The pathological mechanisms of this condition have been studied for a long time. Recently, the disease has come to the attention of many research centers because the number of people diagnosed with keratoconus is rising. Tools that support both diagnosis and treatment decisions are therefore urgently required. The existing devices used for detecting keratoconus are very expensive and not available in rural areas; although these devices are accurate, they are not accessible to everyone.
The exponential development of digital cameras and smartphones has created an opportunity to devise an image-based tool for identifying the illness as early as possible. The proposed system examines the corneal topography and curvature of an eye using a convolutional neural network (CNN) that can extract and learn the features of eyes. The aim of this work is to define a new classification strategy for identifying keratoconus based on statistical analysis and to realize the prediction of this classified data with intelligent systems. Results show that the developed algorithm provides both high performance and accuracy. The proposed framework can help ophthalmologists in the rapid screening of their patients, thereby reducing diagnostic errors and facilitating treatment.
* e-mail: shitesh039@gmail.com
** e-mail: ankit.kadam9454@gmail.com
*** e-mail: prasadbhoir6786@gmail.com
**** e-mail: bharti.joshi@rait.ac.in
2 Literature Survey
1. Alexandru Lavric et al. [1] proposed the detection of keratoconus using a CNN. The main advantage is improved accuracy: the implemented algorithm analyzes the eye's corneal topography, then extracts and learns the characteristics of a keratoconus eye. The main disadvantages are that it requires a very large dataset and is not suitable for small datasets; the proposed work is also not open source.
2. Murat et al. [2] proposed a new classification system for detecting keratoconus based on statistical analysis. The classification of statistical databases was used to identify all the assessed values so that patient progress can be monitored more effectively.
3. Filippo Castiglione et al. [3] proposed a new semi-automatic method for measuring keratoconus from ultrasound images, thus eliminating errors in manual measurements. The algorithm is useful for speeding up the manual measurement of corneal thinning and the computation of keratoconus. Its drawback is that it requires costly ultrasound equipment.
4. Geethu S. S. et al. [4] proposed a method to convert a 2D eye image into a 3D image, since a 3D image provides more information than a 2D one; existing 2D content does not provide depth information. However, the computation is very complex.

5. Valter Wellington et al. [5] described a model that identifies the best evaluation measure to be adopted by the GAADT for the evolutionary process, considering several metrics. Imaging devices analyze the cornea statically, whereas the Ocular Response Analyser uses a dynamic evaluation.
6. Maolong Tang et al. [6] described a model that applies mean curvature mapping as an alternative description of corneal topography. The goal is to enhance the identification of keratoconus and other diseases marked by local increases in corneal curvature.
7. Naoyuki Maeda et al. [7] proposed a system that can be used as a screening technique to differentiate clinical keratoconus from other corneal topographies. This method of quantitative classification may also help to improve the clinical understanding of topographical maps.

3 Proposed Methodology

CNN
CNNs have wide applications in image and video recognition, recommendation systems, and natural language processing. Like ordinary neural networks, CNNs are made up of neurons with learnable weights. Each neuron receives several inputs and computes a weighted sum over them, followed by a non-linearity. The whole network has a loss function, and all the techniques developed for ordinary neural networks still apply to CNNs. The architecture of a CNN is shown in Figure 1.
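The weighted-sum behavior of a single neuron can be sketched as follows (a minimal NumPy illustration; the input, weight, and bias values are made up for the example):

```python
import numpy as np

def neuron(x, w, b):
    """A single neuron: weighted sum of inputs plus bias, then ReLU."""
    return max(0.0, float(np.dot(w, x) + b))

x = np.array([1.0, 2.0, -1.0])    # input features
w = np.array([0.5, -0.25, 1.0])   # learnable weights
b = 0.5                           # learnable bias
print(neuron(x, w, b))  # 0.0 (the weighted sum is negative, so ReLU clips it)
```

During training, the values of `w` and `b` are adjusted to minimize the network's loss.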

CNN operates over volume
Unlike ordinary neural networks, where the input is a vector, the input to a CNN is a multi-channel image, as shown in Figure 2. There are also several other variations. Before going further, let us first understand what convolution means.

Convolution
We take a 5×5×3 kernel filter, slide it over the picture, and along the way compute the dot product between the kernel and patches of the image, as shown in Figure 3. Each dot product produces one activation in the output. The convolution layer is the principal building block of a CNN. It consists of a set of independent filters, each convolved separately with the image. All filters are initialized randomly and become parameters that the network learns during training.
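The sliding dot product over a single-channel image can be sketched in NumPy as follows (the 4×4 image and the 3×3 averaging kernel are arbitrary examples; deep-learning libraries implement this operation far more efficiently):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid convolution (cross-correlation, as in CNN layers) over one channel."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Dot product between the kernel and one patch of the image.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.ones((3, 3)) / 9.0  # simple averaging filter
print(conv2d(image, kernel))
```

A 4×4 input convolved with a 3×3 kernel yields a 2×2 output, which is why deeper layers see progressively smaller spatial sizes.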

Pooling
A pooling layer is another main element of a CNN. Its function is to progressively reduce the spatial size of the representation, which reduces the number of parameters and the amount of computation in the network. The pooling layer operates on each feature map independently. The most common approach is max pooling, shown in Figure 4.
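Max pooling over a single feature map can be sketched as follows (2×2 non-overlapping windows, the most common configuration; the feature-map values are illustrative):

```python
import numpy as np

def max_pool(fmap, size=2):
    """Non-overlapping max pooling over a single 2D feature map."""
    h, w = fmap.shape
    trimmed = fmap[:h - h % size, :w - w % size]  # drop ragged edges
    blocks = trimmed.reshape(h // size, size, w // size, size)
    return blocks.max(axis=(1, 3))  # max within each size*size window

fmap = np.array([[1., 3., 2., 1.],
                 [4., 6., 5., 0.],
                 [7., 2., 9., 8.],
                 [0., 1., 3., 4.]])
print(max_pool(fmap))
# [[6. 5.]
#  [7. 9.]]
```

Each 2×2 window collapses to its maximum value, halving both spatial dimensions while keeping the strongest activations.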

RELU
ReLU is simply a non-linearity, applied just as in ordinary neural networks. The FC layer is the fully connected layer of neurons at the end of a CNN. Neurons in a fully connected layer have full connections to all activations in the previous layer, as in regular neural networks, and operate in a similar way.
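The ReLU non-linearity itself is a one-line element-wise operation; a minimal NumPy sketch:

```python
import numpy as np

def relu(x):
    """Element-wise rectified linear unit: max(0, x)."""
    return np.maximum(0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 1.5, 3.0])))
# [0.  0.  0.  1.5 3. ]
```

Negative activations are clipped to zero while positive ones pass through unchanged, which introduces the non-linearity the network needs without saturating for large inputs.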

Inception
Inception is a recent classification architecture. It is faster than earlier algorithms, saves memory, and performs faster computation.
Inception V3 is widely used for image classification with a pretrained deep neural network. Transfer learning from Inception V3 allows retraining the existing network to solve custom image classification tasks. To add new classes of data to the pretrained Inception V3 model, we use the tensorflow-image-classifier repository, which contains a set of scripts to download the default version of the Inception V3 model and retrain it to classify a new set of images.
Inception V3 consists of inception blocks. In each inception block, convolution filters of various sizes and pooling operations are applied to the input in parallel, and their outputs are concatenated along the channel dimension before being passed on. Three types of inception blocks are used in this architecture; refer to Figure 6.
To reduce the number of parameters, 1×1×(number of input channels) 3D filters are often applied before any other operation on the input; these 1×1 filters also reduce the computation cost. Although the architecture is quite deep, the model has only about 25 million parameters. The larger filters capture highly abstract features, while the smaller ones capture local features.
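A minimal transfer-learning sketch using the Keras implementation of Inception V3 is shown below. This is one possible setup, not the exact scripts of the tensorflow-image-classifier repository; `weights=None` is used here only to keep the sketch offline, whereas real transfer learning would load `weights="imagenet"`, and the 2-class head (healthy vs. keratoconus) is an assumption based on this paper's task:

```python
import tensorflow as tf
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras import layers, models

# Load Inception V3 without its final classification layer.
# In practice, use weights="imagenet" to start from the pretrained model.
base = InceptionV3(weights=None, include_top=False, input_shape=(299, 299, 3))
base.trainable = False  # freeze the feature extractor during retraining

# Attach a new head for a 2-class problem (healthy vs. keratoconus).
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
print(model.output_shape)  # (None, 2)
```

Only the small new head is trained on the eye-image dataset, which is why retraining is feasible even with limited data and hardware.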

System Design
The first step is to provide an image to the system as input. Next, there are two phases: a training phase and a testing phase. In the training phase, the input image is given to the expert-knowledge component of the proposed system, which contains a labeled dataset. Features of the labeled images are extracted using convolution, pooling, and activation functions. Based on these features, a CNN-based feature-learning network is created, which is then used in the testing phase.
In the testing (application) phase, an input image is provided to the trained network. The trained CNN extracts and analyzes the features of the input and assigns a label to it based on those features and its learned approximations. The result is then presented to the user.
The system design is shown in Figure 7.
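The two-phase flow above can be sketched with a toy stand-in for the CNN. Here the feature extractor and the nearest-centroid labeling are illustrative placeholders for the trained network, and the constant-valued "images" are fabricated purely for the sketch:

```python
import numpy as np

def extract_features(image):
    # Stand-in for CNN convolution/pooling: mean and std of pixel values.
    return np.array([image.mean(), image.std()])

# Training phase: build one feature centroid per label from a labeled dataset.
labeled = {
    "healthy":     [np.full((8, 8), 0.2), np.full((8, 8), 0.3)],
    "keratoconus": [np.full((8, 8), 0.8), np.full((8, 8), 0.9)],
}
centroids = {label: np.mean([extract_features(im) for im in imgs], axis=0)
             for label, imgs in labeled.items()}

# Testing phase: assign the label of the closest centroid to a new image.
def classify(image):
    f = extract_features(image)
    return min(centroids, key=lambda lbl: np.linalg.norm(centroids[lbl] - f))

print(classify(np.full((8, 8), 0.85)))  # keratoconus
```

The real system replaces both stages with the trained CNN, which learns the feature extraction and decision boundary jointly rather than using hand-made statistics.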

Future Work
The first change that needs to be made is further testing with an improved dataset. The dataset itself can be improved by adding images, removing undesired ones, applying image preprocessing techniques to improve image quality, or making the image content more specific.