Three-Dimensional Data-Driven Multi-Scale Atomic Representation of Optical Coherence Tomography


In this paper, we discuss applications of methods that decompose a signal over elementary waveforms chosen from a family called a dictionary (atomic representations) in optical coherence tomography (OCT). When the representation is learned from the data, a nonparametric dictionary is obtained with three fundamental properties: it is data-driven, applicable to 3D data, and multiscale, which make it well suited to processing OCT images. We discuss the application of such representations, including complex-wavelet-based K-SVD and diffusion wavelets, to OCT data. We introduce complex-wavelet-based K-SVD to exploit the adaptability of dictionary learning methods and improve the performance of plain dual-tree complex wavelets in speckle reduction of OCT datasets in 2D and 3D. The algorithm is evaluated on 144 randomly selected slices from twelve 3D OCTs acquired with Topcon 3D OCT-1000 and Cirrus Zeiss Meditec devices, improving the contrast-to-noise ratio (CNR) from 0.9 to 11.91 and from 3.09 to 88.9, respectively. Furthermore, two approaches are proposed for image segmentation using diffusion wavelets: the first designs a competition between extended basis functions at each level, and the second defines a new distance for each level and clusters based on such distances. A combined algorithm based on these two methods is then proposed for segmentation of retinal OCTs, which localizes 12 boundaries with an unsigned border positioning error of 9.22 ± 3.05 μm on a test set of 20 slices selected from 13 3D OCTs.
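As a rough illustration of the sparse-coding machinery underlying the denoising method described above (not the paper's actual implementation, which learns dictionaries over dual-tree complex wavelet subbands of OCT volumes), the following NumPy sketch implements orthogonal matching pursuit (OMP) for sparse coding and one K-SVD dictionary-update pass on synthetic data. All function names, sizes, and the sparsity level `k` are illustrative choices, not values from the paper.

```python
import numpy as np

def omp(D, x, k):
    """Greedy OMP: approximate x with at most k atoms of dictionary D."""
    residual = x.copy()
    idx, coef = [], np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))  # most correlated atom
        if j not in idx:
            idx.append(j)
        # re-fit coefficients on the selected atoms (least squares)
        coef, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)
        residual = x - D[:, idx] @ coef
    full = np.zeros(D.shape[1])
    full[idx] = coef
    return full

def ksvd_step(D, X, k):
    """One K-SVD iteration: sparse-code all columns of X, then update
    each atom (and its coefficients) via a rank-1 SVD of the residual."""
    A = np.column_stack([omp(D, X[:, i], k) for i in range(X.shape[1])])
    for j in range(D.shape[1]):
        users = np.nonzero(A[j, :])[0]       # signals that use atom j
        if users.size == 0:
            continue
        A[j, users] = 0.0
        E = X[:, users] - D @ A[:, users]    # error without atom j
        U, s, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, j] = U[:, 0]                    # updated, unit-norm atom
        A[j, users] = s[0] * Vt[0, :]        # updated coefficients
    return D, A

# Toy usage: a random dictionary on random "subband patches".
rng = np.random.default_rng(0)
D = rng.standard_normal((16, 32))
D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
X = rng.standard_normal((16, 200))           # 200 patch vectors
for _ in range(5):
    D, A = ksvd_step(D, X, k=3)
print("residual:", np.linalg.norm(X - D @ A))
```

In the denoising setting, `X` would hold overlapping patches of a noisy wavelet subband, and the denoised subband would be reassembled from the sparse approximations `D @ A` before inverting the complex wavelet transform.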