Design and Analysis of Algorithms for Obtaining Super-Resolution Satellite Images

Summary
This research begins with the context and background, problem statement and challenges, and potential benefits. It then investigates the fundamentals of image processing, in particular its definition, image perception, image sampling and quantization, image enhancement, and image reconstruction from projections…
Chapter One: Context and Background

Satellite programs today are under pressure to deliver ever higher-resolution imagery. History shows that once a satellite is launched, improving the resolution of the images it captures is difficult, because the imaging hardware is fixed. Algorithms have therefore been developed that help transform low-resolution images into high-resolution ones. High-resolution (HR) images have a wide range of uses in various fields, for example medical imaging, video surveillance, and satellite imaging. However, because of hardware limitations, low-resolution (LR) images are obtained far more readily than HR images. As a result, researchers have developed new techniques for obtaining HR images from LR images, chief among them a reconstruction technique known as super-resolution (SR) (Bannore, 2009). The technique addresses the problem of producing HR images from LR images: it allows the recovery of an HR image from several LR images that are blurred, noisy, and down-sampled.

The SR technique uses algorithms to solve the resolution problem. These algorithms take LR images that are related to one another through random translations and rotations and combine them into a single HR image of the original scene. To reconstruct the HR image, the LR images must first be registered relative to a common frame of reference. Second, the pixels from the LR images are used to sparsely populate some of the pixels of the high-resolution image. Finally, the pixel values at the remaining grid points are interpolated to produce an estimate of the HR image. Super-resolution thus formulates a model that relates the HR image to the LR images (Bannore, 2009).
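To make the three-step pipeline above concrete, here is a minimal Python/NumPy sketch, not any particular published algorithm: registration is assumed already solved (each LR frame arrives with a known integer offset on the HR grid), the frames sparsely populate the HR grid, and the remaining grid points are filled by a crude neighborhood interpolation.

```python
import numpy as np

def reconstruct_hr(frames, offsets, factor):
    """Toy register-populate-interpolate pipeline.

    Step 1 (registration) is assumed already solved: each LR frame
    comes with a known integer offset on the HR grid.
    """
    n, m = frames[0].shape
    hr = np.full((factor * n, factor * m), np.nan)

    # Step 2: scatter each registered LR frame onto the HR grid,
    # sparsely populating some of its pixels.
    for frame, (dr, dc) in zip(frames, offsets):
        hr[dr::factor, dc::factor] = frame

    # Step 3: fill the still-empty grid points with the mean of the
    # populated values in their 3x3 neighborhood (crude interpolation).
    for i, j in np.argwhere(np.isnan(hr)):
        hr[i, j] = np.nanmean(hr[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2])
    return hr

lr_frames = [np.full((4, 4), v) for v in (1.0, 2.0, 3.0)]
offsets = [(0, 0), (0, 1), (1, 0)]   # assumed known from registration
hr = reconstruct_hr(lr_frames, offsets, factor=2)
print(hr.shape)                      # (8, 8); the (1, 1) coset was interpolated
```

Real SR pipelines estimate sub-pixel offsets and use far better interpolators; the sketch only mirrors the register-populate-interpolate structure described above.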
Problem Statement and Challenges

The purpose of this project is to develop an algorithm for obtaining an HR image from a set of LR images captured by remote sensing satellites. The algorithm will be tried on images captured by X satellite. Based on the characteristics of the images acquired from this satellite, an investigation should be carried out to choose a suitable method for reconstructing HR images. This may include implementing a set of super-resolution algorithms and comparing their performance. Although high-resolution images are much sought after, the pursuit of higher resolution brings challenges of its own. In acquiring high-resolution imaging systems, one runs into the problem of diminishing returns: the optical components and imaging chips necessary for high-resolution imaging are very expensive, costing up to millions of dollars.

Potential Benefits

High image resolution is beneficial because it increases image detail. In addition, higher resolution is typically accompanied by reduced noise and by increased smoothness in interlaced video.

Chapter Two: Fundamentals of Image Processing

Introduction and Definition

Image processing refers to any kind of signal processing where the input is an image, for instance a video frame or a photograph, and the output is either an image or a set of parameters or characteristics related to the image. In most image processing techniques the image is treated as a two-dimensional signal, and standard signal processing techniques are applied to it. During image processing, various operations may be carried out on the image, including Euclidean geometric transformations (reduction, rotation, and enlargement) and color corrections (color balancing, color mapping, quantization, and contrast adjustment) (Burger & Burge, 2009). Image processing operations may also include interpolation, image registration, and image segmentation.

There are two types of image processing: analog and digital. Digital image processing has several advantages over the analog technique: it allows a wider range of algorithms to be applied to the input data, and it can avoid signal distortion and the build-up of noise. In addition, since images are defined in two dimensions, digital image processing can be modeled in terms of multidimensional systems. Digital image processing permits the use of more complex algorithms and thus offers more sophisticated performance than the analog method; it is the only practical technology for classification, feature extraction, pattern recognition, multi-scale signal analysis, and projection. Image processing has found application in the development of digital camera images and in intelligent transportation systems (Burger & Burge, 2009).

Image Perception

Image perception is very important during image processing. It refers to how different people see and construe the meaning and characteristics of an image. Different individuals will have different image perceptions, which may result from image processing: operations such as color coding, sampling, spatiotemporal filtering, and linearization can all change how an image is perceived (Bannore, 2009). Image printing may yield a different image, or a different image characterization, which again may lead different people to perceive the processed image differently. Color coding, for example, may convey different image information, and the algorithmic principles of neuromorphic circuits may lead to different image perceptions. Image processing helps improve the quality of images through operations such as the reduction or elimination of noise and the avoidance of signal distortion (Burger & Burge, 2009); this helps give images meaning and thereby improves image perception. It is through image processing operations such as color correction (color balancing, color mapping, quantization, and contrast adjustment) that an image is brought into focus, increasing image resolution and again improving perception. Image perception also varies from one individual to another, since different individuals have different visual capabilities.

Image Sampling and Quantization

Image quantization is used in digital image processing. It is the process of mapping a large set of input values onto a smaller set, for instance by rounding values to a fixed precision. An algorithmic function or device that performs quantization is known as a quantizer (Williams, 1989). Quantization is involved in almost all digital signal processing, since representing a signal in digital form involves rounding, and it lies at the core of all lossy compression algorithms.
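As a minimal illustration of scalar quantization, here is a uniform-quantizer sketch in Python; the signal and step size are arbitrary choices for the example.

```python
import numpy as np

def uniform_quantize(x, step):
    """Uniform scalar quantizer: round each value to the nearest
    multiple of `step` (a many-to-few, irreversible mapping)."""
    return step * np.round(x / step)

# A continuous-valued "signal" sampled from a smooth ramp.
signal = np.linspace(0.0, 1.0, 11)

q = uniform_quantize(signal, step=0.25)
print(signal)  # [0.  0.1  0.2 ... 1.]
print(q)       # only 5 distinct levels (0, 0.25, 0.5, 0.75, 1.0) remain
```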
Since quantization is a many-to-few mapping, it is a nonlinear and irreversible process: it is impossible to recover the precise input value from the output value alone. Two different classes of application use quantization: rounding quantization, which converts a signal from analog to digital form, and rate-distortion-optimized quantization, which is used in source coding. Scalar quantization, the most common type, maps a scalar input value to a scalar output value using a quantization function.

To create a digital image, both sampling and quantization are required. Given a continuous image f(x, y), converting it to digital form requires sampling the function in both coordinates and amplitude. Digitizing the coordinate values constitutes image sampling, while digitizing the amplitude values constitutes quantization. Dense sampling produces a high-resolution image with many pixels; coarse sampling produces a low-resolution image with few pixels. Sampling, in general, is the reduction of a continuous signal to a discrete signal (Burger & Burge, 2009).

Image Enhancement

Image enhancement is the process of improving the quality of a digitally stored image by manipulating it with software, and various programs exist for the purpose. The chief aim of image enhancement is to improve the interpretability, for human viewers, of the information an image contains; it also provides better input for other automated image processing techniques (Bannore, 2009). There are two broad categories of image enhancement: spatial domain methods, which operate directly on pixels, and frequency domain methods, which operate on the Fourier transform of the image. Enhancement operations include the addition of borders, cropping, convolution filtering, median filtering, and thresholding, all aimed at improving the quality of a digital image. In image enhancement, the image is improved without regard to the source of degradation, which is usually unknown (Williams, 1989); when the source of degradation is known, the process is instead called image restoration. A standing challenge for image enhancement is that there is no single definition of the desired measure of image quality: different people judge quality differently, so interpretations of an enhanced image will vary between viewers. Enhancement methods are also problem-oriented, since a method that works best in one case may be wholly inadequate for another.
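As a small worked example of a spatial-domain enhancement, the sketch below applies the median filtering mentioned above to a synthetic impulse-noise image; it assumes SciPy is available, and the noise model is chosen for the example.

```python
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)

# Synthetic grayscale image corrupted with salt-and-pepper (impulse) noise.
image = np.full((64, 64), 128.0)
mask = rng.random(image.shape)
image[mask < 0.05] = 0.0      # "pepper"
image[mask > 0.95] = 255.0    # "salt"

# Spatial-domain enhancement: a 3x3 median filter removes the
# impulses while preserving edges better than linear smoothing would.
enhanced = median_filter(image, size=3)

print(float(np.abs(image - 128.0).mean()))     # large deviation before filtering
print(float(np.abs(enhanced - 128.0).mean()))  # near zero after filtering
```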
Image Reconstruction from Projections

Image reconstruction refers to methods for processing two-dimensional images in a computer in order to enhance or analyze them into a form more useful to the human viewer (Bannore, 2009). In computed tomography, an image has to be reconstructed from projections of the object, and different methods are used, including iterative reconstruction techniques and the filtered back projection method. Iterative reconstruction uses iterative algorithms to reconstruct two- and three-dimensional images in certain imaging modalities. In computed tomography, the iterative technique often yields better results than the alternatives; the main challenge lies in the computations, since the technique is computationally more expensive than filtered back projection. The chief advantages of the iterative method are its ability to reconstruct a near-optimal image even from incomplete data and its improved insensitivity to noise. The technique is applied in emission tomography modalities such as PET and SPECT, where the noise statistics are considerably poorer, and it is also valuable when a large set of projections is not available. In computed tomography there are various such algorithms, but each starts with an assumed image, computes projections from that image, and compares the computed projections with the actual ones (Williams, 1989).

Chapter Three

According to Yuan, Zhang, Shen and Li (2010), the maximum a posteriori model, usually referred to as the MAP model, is the most widely used of the available reconstruction frameworks. In the MAP model the regularization parameter plays a significant role: it controls the trade-off between the prior term and the fidelity term. If the parameter is too small, noise is not restrained effectively; if it is too large, the reconstruction becomes blurry. The optimal regularization parameter should therefore be selected carefully. Yuan et al. select it using a MAP method based on the U-curve: a U-curve function is first formed from the fidelity and prior terms, and the optimal parameter is then chosen, usually at the point of maximum curvature on the left branch of the curve. The algorithm was tested on both real and simulated data, and the method proved effective and robust in both quantitative terms and visual effect (Yuan, Zhang, Shen and Li, 2010).

Irani and Peleg (1991) argue that image resolution depends on the physical properties of the sensor: the optics and the density and spatial response of the detector elements. Increasing resolution by modifying the sensor may not be an option, but the sampling rate can be raised by taking more samples of the scene, and estimating the sensor's spatial response helps in obtaining sharper images. Super-resolution is feasible for monochrome and color image sequences when the displacements can be computed and the imaging process is known. According to Irani and Peleg, when an iterative algorithm is applied to a single image without an increase in the sampling rate, super-resolution reduces to de-blurring. Their iterative algorithm works well for both real and computer-simulated images and can be executed in parallel for faster hardware implementation; when image sequences are used, accurate knowledge of the relative displacements of scene regions is necessary.

Elad and Feuer (1997) hold that there are three chief tools in single-image restoration theory: the maximum likelihood (ML) estimator, the maximum a posteriori (MAP) estimator, and the approach using projection onto convex sets (POCS). They propose a hybrid algorithm that combines the simplicity of the ML estimator with the ability of POCS to handle non-ellipsoidal constraints. The hybrid solves a constrained convex minimization problem and incorporates all a priori knowledge into the restoration process.
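In schematic form, the ML and MAP estimation tools named above minimize a cost of the following shape. This is a generic formulation assembled from the common SR literature, not a formula quoted from Elad and Feuer; the symbols are defined below.

```latex
\hat{\mathbf{x}} \;=\; \arg\min_{\mathbf{x}}
\;\sum_{k=1}^{K} \left\| \mathbf{y}_k - D\,B\,W_k\,\mathbf{x} \right\|_2^2
\;+\; \lambda\,\rho(\mathbf{x})
```

Here the y_k are the K observed LR frames, W_k is the warp (registration) operator, B the blur, D the decimation, rho a prior on the HR image x, and lambda the regularization parameter. With lambda = 0 this is an ML estimator; with lambda > 0 it is a MAP estimator, and lambda is exactly the trade-off parameter whose U-curve selection was discussed above.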
The methods proposed by Elad and Feuer enable image restoration that is improved from both a computational and a visual point of view, and these tools enhance super-resolution restoration. Their frequency-domain algorithm is possible only in the linear space-invariant (LSI) case, where motion, blur, and decimation are space-invariant; this restoration algorithm can be implemented effectively in a parallel scheme (Elad and Feuer, 1997).

Goudy, Kubik, Rouge and Latry focused their study on SPOT-5 and Pleiades-HR, which are remote sensing satellites. They show that improved performance has been achieved through minimal modification of already designed instruments. To increase image acquisition capacity, SPOT-5 is fitted with two instruments, named HRG, each with a rotating mirror that allows more frequent revisits of a site through off-nadir viewing, and each with on-board memory for storing images recorded all over the globe. According to Goudy, Kubik, Rouge and Latry, SPOT-5 delivers panchromatic images with a ground resolution of five meters, plus a special 2.5-meter mode, and multispectral images with 10-meter resolution in three spectral bands: red, green, and near-infrared. The design of the Pleiades-HR system is driven mainly by the specified radiometric image quality in the panchromatic band, which supplies the images with the sharpest resolution.

According to Ng and Bose (2003), achieving super-resolution from a series of degraded, under-sampled images can be viewed as reconstructing a high-resolution image from a finite set of its projections on a sampling lattice. This can be posed as an optimization problem whose solution is obtained by minimizing a cost function. The image acquisition model is vital to formulating the degradation process, and the model needs to be accurate for super-resolution of the desired quality to be achieved. To keep the presentation compact and of reasonable size, data are taken to be acquired with multiple sensors; the resulting high-resolution imagery is closely related to that of HDTV and VHD image sensors. Monochrome processing algorithms applied separately to the red, green, and blue color channels are not optimal, since they fail to exploit the spectral correlation between the three channels (Ng and Bose, 2003).

Latry and Rouge (2003) propose that the panchromatic resolution of SPOT5 can be raised by fitting the instrument with a THR mode, leading to an optimization of the system. Obtaining a THR image is a difficult ground-processing task involving complex steps such as de-noising, de-convolution, and quincunx interpolation. THR panchromatic data are obtained by interleaving two five-meter panchromatic images offset by 2.5 meters in both the row and column directions. Interleaving the two images gives a quincunx sampling grid, which is well suited to the instrument since it avoids a large increase in the amount of information the instrument must deliver. This sampling has been shown to be optimal when the cut-off frequency of the system is determined by the detector element size. The efficiency of the THR mode depends on radiometric and geometric performance (Latry and Rouge, 2003).
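The interleaving step can be sketched in a few lines. The arrays here are hypothetical stand-ins for the two five-meter images, and the real THR chain additionally performs de-noising, de-convolution, and interpolation of the empty quincunx sites (marked NaN below).

```python
import numpy as np

def quincunx_interleave(img_a, img_b):
    """Place two images, offset by half a pixel in both row and
    column, onto a common grid of twice the sampling density.
    Sites covered by neither image are left as NaN, to be filled
    later by quincunx interpolation."""
    n, m = img_a.shape
    grid = np.full((2 * n, 2 * m), np.nan)
    grid[0::2, 0::2] = img_a    # first 5 m image
    grid[1::2, 1::2] = img_b    # second image, shifted 2.5 m diagonally
    return grid

a = np.arange(9.0).reshape(3, 3)
b = a + 0.5                     # stand-in for the half-pixel-shifted view
g = quincunx_interleave(a, b)
print(g[:4, :4])                # checkerboard of samples and NaNs
```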
According to S. Park, M. Park and M. Kang (2003), high resolution implies a high pixel density, so an HR image offers more detail than an LR image. They argue that the most direct route to increased spatial resolution is to reduce the pixel size through sensor manufacturing techniques. However, as the pixel size decreases, the light available to each pixel also decreases, generating shot noise that severely degrades image quality; an optimal pixel size has therefore been proposed to curb the problem. Spatial resolution may also be enhanced by increasing the chip size, which increases capacitance, but this approach is not effective because the increased capacitance makes it cumbersome to speed up charge transfer (S. Park, M. Park and M. Kang, 2003).

Hardie (2007) proposes an adaptive Wiener filter super-resolution algorithm. The algorithm obtains an improved-resolution image from a series of low-resolution video frames with overlapping fields of view. It uses sub-pixel registration to position every low-resolution pixel value on a common spatial grid referenced to the average position of the input frames. According to Hardie, the adaptive Wiener algorithm has low computational complexity for translational motion and is suitable for real-time processing applications. Among the varieties of super-resolution algorithms (frequency-domain, iterative, learning-based, and interpolation-restoration), the interpolation-restoration approach is the simplest both intuitively and computationally, and the adaptive Wiener filter produces the lowest errors and the lowest standard deviation of error.
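Hardie's algorithm operates on a registered multi-frame stack; as a deliberately simplified single-frame stand-in (not the published adaptive Wiener filter), the sketch below applies SciPy's local-statistics Wiener filter, which likewise estimates each output pixel from the local mean and variance in a window.

```python
import numpy as np
from scipy.signal import wiener

rng = np.random.default_rng(1)

# Stand-in for an interpolated low-resolution frame: a smooth ramp
# corrupted with additive Gaussian noise.
clean = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
noisy = clean + rng.normal(scale=0.05, size=clean.shape)

# Local-statistics Wiener filtering: each output pixel is estimated
# from the local mean and variance in a 5x5 window.
restored = wiener(noisy, mysize=5)

print(float(np.mean((noisy - clean) ** 2)))     # MSE before filtering
print(float(np.mean((restored - clean) ** 2)))  # lower MSE after filtering
```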
According to Qiao and Liu (2006), most super-resolution algorithms ignore illumination problems such as changes in illumination direction and shadows, so they propose a logarithmic-wavelet transformation method that combines super-resolution with shadow removal in a single operation. The wavelet transformation eliminates lighting effects in an image; after the transformation, a low-dimensional subspace is constructed and the transformed vectors form the new image. Qiao and Liu hold that the HR image is then reconstructed with the support of manifold learning, followed by a POCS algorithm. Their results show that the logarithmic-wavelet approach achieves image enhancement and super-resolution simultaneously. For a color image, the wavelet approach only removes the illumination effect, and super-resolution is carried out using other methods such as interpolation.

Merino and Nunez (2007) propose Super-Resolution Variable-Pixel Linear Reconstruction (SRVPLR). The algorithm combines different lower-resolution images to obtain a higher-resolution image, and can deliver significant spatial-resolution improvements for satellite images. SRVPLR is based on the Variable-Pixel Linear Reconstruction algorithm developed by Hook and Fruchter; it preserves photometry and removes the effect of geometric distortion on image shape. SRVPLR uses a non-uniform interpolation with a low computational load, which favors real-time application. In the model, the degradation models are few and apply only when noise and blur are similar across all the low-resolution images (Merino and Nunez, 2007).

Li, Jia and Fraser (2008) propose a maximum a posteriori super-resolution method based on a universal Hidden Markov Tree model, written MAP-uHMT, for remote sensing images. The uHMT in the wavelet domain serves to set up a prior model for reconstructing super-resolution images from a series of warped, sub-sampled, blurred, and noise-contaminated LR images; in MAP-uHMT these damaging influences are accounted for in the imaging steps. Images super-resolved with MAP-uHMT tend to be more detailed and much sharper than the average image. In their tests the image is enlarged and aligned at the usual scale of 4x4 through bilinear interpolation; the interpolated image is treated as blurred, and a Wiener filter is applied with the derived PSF, which indicates that super-resolution is achieved (Li, Jia and Fraser, 2008).

Xiang-guang (2008) addresses super-resolution reconstruction design through an Intersecting Cortical Model (ICM) algorithm applied to bilinear interpolation. According to Xiang-guang, super-resolution image reconstruction is sensitive to outliers in the data. Based on a simplified PCNN, Xiang-guang proposes a design strategy that reduces the effect of outliers on the reconstructed image. The ICM derives from studies of the visual cortex of small mammals; it works in the same way as a PCNN and produces binary output images from a digital input image. The approach can be described as follows: first, a proper set of ICM parameters makes the neurons corresponding to outlier pixels emit pulses, which drive iterations over their neighborhoods; a median filter is then used to remove the outliers, and bilinear interpolation reconstructs the result of the median filter (Xiang-guang, 2008).

Xiang-guang (2008) further proposes applying the ICM algorithm to cubic spline interpolation. In image processing, non-linear filters remove outliers effectively while preserving image detail; current non-linear filter algorithms include the median filter, the stack filter, and the morphology filter. According to Xiang-guang, cubic spline interpolation preserves image detail well provided that no non-Gaussian outliers are present, but for images contaminated with salt-and-pepper outliers its effect alone is not sufficient. The median filter is therefore applied first, and its result is then reconstructed using cubic spline interpolation.
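The two-stage structure, outlier suppression followed by interpolation, can be sketched as follows. The ICM front end is replaced here by the median filter it ultimately drives, and the bilinear step is written out explicitly; the cubic-spline variant is analogous.

```python
import numpy as np
from scipy.ndimage import median_filter

def bilinear_upscale(img, factor):
    """Bilinear interpolation onto a grid `factor` times denser."""
    n, m = img.shape
    rows = np.linspace(0, n - 1, factor * n)
    cols = np.linspace(0, m - 1, factor * m)
    r0 = np.floor(rows).astype(int)
    r1 = np.minimum(r0 + 1, n - 1)
    c0 = np.floor(cols).astype(int)
    c1 = np.minimum(c0 + 1, m - 1)
    wr = (rows - r0)[:, None]        # vertical interpolation weights
    wc = (cols - c0)[None, :]        # horizontal interpolation weights
    top = (1 - wc) * img[np.ix_(r0, c0)] + wc * img[np.ix_(r0, c1)]
    bot = (1 - wc) * img[np.ix_(r1, c0)] + wc * img[np.ix_(r1, c1)]
    return (1 - wr) * top + wr * bot

lr = np.arange(16.0).reshape(4, 4)
lr[1, 2] = 255.0                       # an impulse outlier
cleaned = median_filter(lr, size=3)    # stage 1: suppress the outlier
hr = bilinear_upscale(cleaned, 2)      # stage 2: bilinear reconstruction
print(hr.shape)                        # (8, 8)
```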
According to Pu, Jin and Liu (2007), super-resolution is a good route to a higher-resolution image without any change of hardware: the algorithms derive higher-resolution images from the original low-resolution images. Pu, Jin and Liu propose combining the MAP and MMAP algorithms with wavelet filtering theory to improve restoration quality and computational speed. Combining the MAP algorithm with wavelet filtering yields a post-wavelet MAP algorithm (PWMAP), whose super-resolution ability is better than that of the MAP algorithm and whose calculation speed is much faster than that of the MMAP algorithm. Super-resolution restoration algorithms divide into two broad types, linear and non-linear, with linear restoration based mainly on linear systems. The MMAP algorithm effectively reduces ringing in the iterative algorithm and thus improves the restoration; PWMAP works best because, when ringing begins to appear, most of it is eliminated by the wavelet filtering, and when the noise is relatively large PWMAP has a stronger capacity to restrain it. The main limitation of PWMAP is that it cannot meet real-time requirements, since it is still an iterative algorithm (Pu, Jin and Liu, 2007).

Xiao-feng and He-fei (2010) propose an edge-adaptive interpolation algorithm for super-resolution reconstruction, whose objective is to obtain a high-resolution image from a low-resolution image. First, an HR image is formed from the LR image through bilinear interpolation and its edges are detected. The edges of this initial HR image are then refined through two approaches: the first is based on the geometric duality between the HR covariance and the LR covariance, and the second on the local structure features of the image. The main idea behind edge-adaptive interpolation is to estimate local covariance coefficients from the LR image and then use those covariance estimates to adapt the interpolation on the HR grid, exploiting the geometric duality between HR and LR covariance. The edge-adaptive approach can overcome the influence of varying illumination during interpolation, reduces jaggy noise along edges, and reduces computation since it works pixel-wise (Xiao-feng and He-fei, 2010).

Begin and Ferrie (2006) argue that an MRF-based super-resolution algorithm works by improving a previously interpolated image, although the degree of improvement differs with the quality measure chosen and the image category. The method using MRFs gives higher values for the correlation coefficient and for MSSIM; for PSNR, however, the MRI and remote sensing (RS) image categories score higher than with the MRF method. The MRF method is noise-sensitive, which explains why it has many problems super-resolving this category of image. According to Begin and Ferrie, the method using MRFs works better than the image analogies framework, although both are learning-based algorithms. The MRF algorithm improves images interpolated with standard methods, with the degree of improvement varying by image category, and it consistently increases MSSIM and hence the measured structural quality of the result.

Pestak (2010) is of the opinion that factors limiting image resolution exist in all imaging systems, and that decreasing pixel size is not always viable: there is an optimal pixel size that must be respected to ensure optimal image resolution. Most super-resolution algorithms follow the sequence of registration, interpolation, and restoration. Super-resolution image reconstruction is normally modeled as an inverse problem: the goal is to reverse the effects of warping, under-sampling, and blurring that relate the LR images to the desired HR image. Because physical limitations and cost trade-offs constrain image acquisition, image processing algorithms offer an opportunity for increased spatial resolution where manufacturing improvements are not attainable. Single-image interpolation is limited in its ability to recover higher-frequency content. Most SR image reconstruction models are ill-posed because of their inverse nature; the problem may be regularized through sub-pixel shifts and a priori knowledge of the solution, which makes the SR reconstruction well-posed. Regularization methods have the further advantage of modeling the noise in the system (Pestak, 2010).

According to Sorrentino and Antoniou (2008), multi-frame super-resolution algorithms can be used to reconstruct a high-quality HR image from several blurred, under-sampled, warped, and noisy images. A widely used way of implementing these algorithms is optimization-based model inversion.
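As a toy instance of optimization-based model inversion (integer circular shifts and plain decimation stand in for the warp and sensor models; published algorithms also handle blur and sub-pixel motion), the sketch below recovers an HR image from four shifted, decimated, noisy frames by steepest descent on the data-fidelity cost.

```python
import numpy as np

rng = np.random.default_rng(2)

def shift(img, dr, dc):
    """Integer circular shift (a toy stand-in for sub-pixel warping)."""
    return np.roll(np.roll(img, dr, axis=0), dc, axis=1)

def decimate(img, f):
    """Plain f-fold decimation (a toy stand-in for the sensor model)."""
    return img[::f, ::f]

def decimate_T(lr, f, shape):
    """Adjoint of `decimate`: place LR samples back on the HR grid."""
    hr = np.zeros(shape)
    hr[::f, ::f] = lr
    return hr

# Ground truth and four shifted, decimated, noisy observations.
f, shifts = 2, [(0, 0), (0, 1), (1, 0), (1, 1)]
x_true = np.outer(np.sin(np.linspace(0, 3, 32)), np.cos(np.linspace(0, 3, 32)))
obs = [decimate(shift(x_true, dr, dc), f) + rng.normal(scale=0.01, size=(16, 16))
       for dr, dc in shifts]

# Steepest descent on the sum-of-squares data-fidelity cost.
x = np.zeros_like(x_true)
tau = 0.2                                 # fixed step size
for _ in range(200):
    grad = np.zeros_like(x)
    for (dr, dc), y in zip(shifts, obs):
        resid = decimate(shift(x, dr, dc), f) - y
        grad += shift(decimate_T(resid, f, x.shape), -dr, -dc)
    x -= tau * grad

print(float(np.mean((x - x_true) ** 2)))  # small reconstruction error
```

The slow, fixed-step iteration here is exactly the steepest-descent (SD) behavior that the quasi-Newton methods discussed next are designed to improve upon.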
These methods are easy to implement but have poor convergence properties and are sensitive to numerical ill-conditioning. These difficulties in the multi-frame super-resolution problem can be addressed by using quasi-Newton algorithms and proposing efficient implementations. The use of efficient quasi-Newton algorithms for the multi-frame SR problem should be strongly encouraged, since it yields images of better quality and improved convergence speed compared with the steepest-descent (SD) method, and it enhances high-quality image reconstruction overall (Sorrentino and Antoniou, 2008).

According to Lin and Shum (2001), there are limits to super-resolution. The performance of reconstruction-based algorithms (RBAs) is affected by several factors: the noise level in the LR images, the accuracy of PSF estimation, the accuracy of registration, and geometric distortion. The higher the noise level, and the poorer the PSF estimation and registration, the smaller the achievable improvement in resolution; in practice the improvement is limited. Moreover, current algorithms, and not only RBAs, produce images with undesirable characteristics when the magnification factor becomes large (Lin and Shum, 2001).

According to Martins, Homem and Mascarenhas (2007), a Markov random field (MRF) procedure can be used for super-resolution. In this procedure a Potts-Strauss model is assumed for the a priori probability density function of the actual image. The first step aligns all the LR observations on an HR grid; resolution is then improved using the Iterated Conditional Modes (ICM) algorithm. The technique was analyzed on a number of simulated LR, purely translated observations. ICM offers a feasible alternative for computing the MAP estimate of the actual image from a given set of observations: full MAP algorithms make huge computational demands because of the inherent difficulty of computing the MAP estimate, whereas ICM ignores the large-scale deficiencies of the a priori probability for the true image and is computationally undemanding. The registration procedure consistently gives better results than plain interpolation of the image (Martins, Homem and Mascarenhas, 2007).

Works Cited

Bannore, Vivek. Iterative-Interpolation Super-Resolution Image Reconstruction: A Computationally Efficient Technique. New York: Springer, 2009. Print.

Burger, Wilhelm, and Mark J. Burge. Principles of Digital Image Processing: Fundamental Techniques. New York: Springer, 2009. Print.

Williams, J.B. Image Clarity: High-Resolution Photography. New York: Focal Press, 1989. Print.
