Computer Vision Application: Fault Detection Techniques


Fault localization has been a focus of research activity since the advent of modern communication systems, producing numerous fault localization techniques. As communication systems have evolved, becoming more complex and offering new capabilities, the requirements imposed on fault localization techniques have changed as well. It is fair to say that despite this research effort, fault localization in complex communication systems remains an open research problem.1 The field presents plenty of challenges, and over the course of the last ten years researchers have proposed numerous solutions, each with its own advantages and shortcomings.

In the case of seismic data, sensor networks provide information about phenomena or events at a much higher level of detail than was previously available. To draw meaningful conclusions from sensor data, the quality of the data received must be ensured. While the use of sensor networks in embedded sensing applications has been accelerating, data integrity tools have not kept pace with this growth. One root cause is a lack of in-depth understanding of the types of faults that can occur in sensor networks and of the features associated with those faults. Without a good model of faults in a sensor network, one cannot design an effective fault detection process.

The potential for faults in distributed computing systems is a significant complicating factor for application developers. Though a variety of techniques exist for detecting and correcting faults, implementing these techniques in a particular context can be difficult. Fault detection services can be designed to be incorporated, in a modular fashion, into distributed computing systems, tools, or applications.
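One simple modular detection service of this kind is a heartbeat monitor with a timeout. The following is an illustrative sketch, not an implementation from any cited work; the class and its semantics are assumptions. The single `timeout` parameter captures the trade-off between timeliness of reporting and the false-positive rate.

```python
import time

class HeartbeatDetector:
    """Unreliable failure detector (sketch): a component is suspected
    once no heartbeat has arrived within `timeout` seconds. A short
    timeout reports failures quickly but raises the false-positive
    rate; a long timeout does the opposite."""

    def __init__(self, timeout):
        self.timeout = timeout
        self.last_seen = {}

    def heartbeat(self, component, now=None):
        # Record the arrival time of a heartbeat from `component`.
        self.last_seen[component] = time.monotonic() if now is None else now

    def suspected(self, now=None):
        # Components whose last heartbeat is older than the timeout.
        now = time.monotonic() if now is None else now
        return [c for c, t in self.last_seen.items() if now - t > self.timeout]
```

A slow but live component can thus be falsely suspected, which is exactly why such detectors are called unreliable.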
There are also services that use well-known techniques founded on unreliable fault detectors to detect and report component failure, while permitting the user to trade off timeliness of reporting against the false-positive rate.

Concern over the violent release of high-pressure fluids and paleo-stresses that are sometimes trapped by deep faults and dykes has led deep-mine planners to seek a technology capable of detecting geological structures with a displacement of 2 m, 200 m ahead of mining, at an acceptable cost. A similar mapping need arises in coal mining, where a throw of less than a meter can bring a long-wall coal shearer to its knees and inject ash over 10% of the face line if it is decided to carry the fault.

Monitoring large systems with multiple non-linearly interconnected parts and multiple data outputs, for detection and diagnosis purposes, is a problem that has been both widely and sparsely studied. Widely, because it occurs almost ubiquitously in real-life applications of data mining and model-based reasoning; sparsely because, at least in the data mining context, there exists no real consensus on what a complex system actually is. It is not clear whether it is useful to postulate that a system with a causal graph above a certain level of complexity should be defined as "complex"; nor can the size of the system or of the extracted data serve as a good measure of complexity.

One technique that may be used is the level set technique, in which the evolution task aims to determine the location and shape of the targeted surface in 3-D, though not its extent. It is treated as a maximum a posteriori (MAP) estimation problem within a statistical framework.2 The technique works by evolving an initial surface based on a number of intrinsic and extrinsic parameters, such that the surface can grow and shrink dynamically according to the underlying fault surface.
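The surface evolution just described can be sketched in 2-D (the cited work operates in 3-D per voxel). This is an illustrative simplification: the explicit update scheme, the attribute name `likelihood`, and the curvature weight `alpha` are assumptions, not taken from the source.

```python
import numpy as np

def level_set_step(phi, likelihood, dt=0.1, alpha=0.2):
    """One explicit evolution step of a 2-D level set (sketch). The
    surface is the zero set of phi; it grows where the fault-likelihood
    attribute is high and is smoothed by a curvature penalty whose
    weight `alpha` is an illustrative parameter."""
    gy, gx = np.gradient(phi)
    grad_mag = np.sqrt(gx**2 + gy**2) + 1e-12
    # Mean curvature of the level sets: div(grad phi / |grad phi|).
    nyy, _ = np.gradient(gy / grad_mag)
    _, nxx = np.gradient(gx / grad_mag)
    curvature = nxx + nyy
    # Outward speed: grow with likelihood, shrink with curvature.
    speed = likelihood - alpha * curvature
    return phi - dt * speed * grad_mag
```

Iterating this step from a seeded signed-distance function lets the zero set expand along high-likelihood regions while the curvature term keeps the surface smooth.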
Initial conditions have to be defined for the level set, providing a starting point from which the surface evolves. They are created either automatically or manually and used as seed points for the level set calculation. The level set is then computed in 3-D at each voxel using an evolution equation based on a fault-likelihood propagation value and a topological curvature constraint. A cluster of techniques then segregates the planar representation into discrete surfaces based on geologic dip and azimuth direction. An integrated 3-D visualization technique is then used to create a final interpretation, with simple mouse strokes merging the discrete surfaces into faults. The results are finally compared to a human interpretation of the data in order to establish their effectiveness and validity.3

The process of deducing the exact source of a failure from a set of observed failure indications, or fault localization, is a central aspect of network fault management. Most Internet services, e-commerce sites, search engines, and so on, suffer faults, and being able to detect these faults quickly can be the largest bottleneck in improving the availability of the system. One methodology automates fault detection in Internet services by first observing low-level internal structural behaviors of the service, then modeling the majority behavior of the system as correct, and finally detecting anomalies in these behaviors as possible symptoms of failure.

Fault injection is important for evaluating the dependability of computer systems. Researchers and engineers have created many novel methods to inject faults, which can be implemented in both hardware and software.
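A software-level injection can be sketched as follows. This is an illustration, not a method from the cited works: the bit-flip helper and the toy parity detector are assumptions chosen to show the injection/detection loop.

```python
import random

def inject_bit_flip(state: bytearray, rng=None):
    """Software fault injection (sketch): flip one random bit in a
    buffer standing in for target-system state. Returns (byte index,
    bit index) so the experiment can log what was injected."""
    rng = rng or random.Random()
    i = rng.randrange(len(state))
    bit = rng.randrange(8)
    state[i] ^= 1 << bit
    return i, bit

def parity_detects(original: bytes, corrupted: bytes) -> bool:
    """Toy error-detection mechanism under test: a single parity bit
    over the whole buffer. It catches any odd number of bit flips."""
    parity = lambda b: bin(int.from_bytes(b, "big")).count("1") % 2
    return parity(original) != parity(corrupted)
```

Running many such injections against a real detection mechanism yields coverage statistics, at the cost of the perturbation overhead noted below for software methods.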
Software methods are convenient for directly producing changes at the software-state level. Hardware methods are therefore used to evaluate low-level error detection and masking mechanisms, and software methods to test higher-level mechanisms. Software methods are less expensive, but they also incur a higher perturbation overhead because they execute software on the target system. Research also indicates that, despite the most accepted coherence measures, these methods are typified by a lack of robustness, particularly when dealing with noisy data.4 Fault injection thus remains an important means of evaluating dependability in both hardware and software.5

One of the techniques that can be used in this study is Ant Tracking. Given the noisy nature of the attributes, extracting surfaces from fault attributes is quite difficult: in ant tracking, surfaces often appear more like trends than continuous surfaces. The fault extraction step in this technique extracts the faults that appear to be continuous, using the principles of swarm intelligence. The Ant-Tracking algorithm is used to generate a fault attribute volume. Several steps are involved in the workflow, including data pre-conditioning, edge detection, and Ant-Tracking. The data pre-conditioning step applies an edge-preserving smoother to the input seismic in order to enhance reflector continuity while preserving the discontinuous events corresponding to the presence of faults. Next, an edge detection method is applied to extract features that represent discontinuities in the seismic image.
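One common discontinuity attribute of this kind is trace-to-trace variance. The following 2-D sketch is illustrative, not the attribute used in the cited workflow; the window size and energy normalization are assumptions.

```python
import numpy as np

def variance_attribute(seismic, win=3):
    """Variance-style discontinuity attribute (2-D sketch): for each
    sample, the variance across `win` laterally neighbouring traces at
    that depth, normalised by local energy. High values mark
    trace-to-trace dissimilarity such as faults."""
    half = win // 2
    # Pad laterally so edge traces have a full neighbourhood.
    p = np.pad(seismic, ((0, 0), (half, half)), mode="edge")
    stack = np.stack(
        [p[:, k:k + seismic.shape[1]] for k in range(win)], axis=0)
    var = stack.var(axis=0)
    energy = (stack**2).mean(axis=0) + 1e-12
    return var / energy
```

On synthetic flat layers with a faulted block the attribute is near zero away from the fault, where neighbouring traces are identical, and high along the fault columns.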
This process, however, detects all discontinuities, including faults, channel boundaries, acquisition footprint, reflector amplitude variations, and various other features that disrupt reflector continuity. The Ant-Tracking algorithm then extracts features within the edge detection volume that exhibit fault-like behavior. It uses principles from ant colony systems to extract such trends in a noisy data environment. Digital intelligent agents (ants) are distributed throughout the edge detection volume and extract features that exhibit consistency in dip, azimuth, planarity, and spatial continuity. The output is an attribute volume containing only fault-like features.

There are two common uses of the term detection in computer vision. The first is concerned with determining whether or not an object is present in the image, while the other is concerned with localizing the object in the image, given that it is present. This work uses the term detection in the latter sense. The term localization could have been used instead, but open surface detection includes both localization of the open surface in space and determination of the shape and extent of the deformable surface.

There is also research on the automatic extraction of faults from seismic data,6 manual interpretation of which is a time-consuming task. The first step in this technique is the creation of an attribute cube that captures the faults by having locally high energy levels along the fault surfaces and low energy levels elsewhere. The resulting attribute cube is then conditioned, enabling the extraction of high-energy surfaces as separate objects.
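Extracting high-energy surfaces as separate objects can be sketched as thresholding followed by connected-component labeling. The threshold value and the breadth-first labeling below are illustrative assumptions, not the conditioning procedure of the cited work.

```python
import numpy as np
from collections import deque

def extract_objects(attr, threshold):
    """Sketch: threshold an attribute array and label connected
    high-energy regions so each can be treated as a separate surface
    object. Uses face connectivity (4-connectivity in 2-D,
    6-connectivity in 3-D) via breadth-first search."""
    mask = attr > threshold
    labels = np.zeros(attr.shape, dtype=int)
    # Face-neighbour offsets for an array of any dimensionality.
    offsets = []
    for axis in range(attr.ndim):
        for d in (-1, 1):
            off = [0] * attr.ndim
            off[axis] = d
            offsets.append(tuple(off))
    count = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue
        count += 1
        labels[start] = count
        queue = deque([start])
        while queue:
            p = queue.popleft()
            for off in offsets:
                q = tuple(pi + oi for pi, oi in zip(p, off))
                if all(0 <= qi < s for qi, s in zip(q, attr.shape)) \
                        and mask[q] and not labels[q]:
                    labels[q] = count
                    queue.append(q)
    return labels, count
```

Each labeled component can then be examined for the planarity and dip consistency expected of a fault surface.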
Enhancing faults entails enhancing the discontinuities present in the data. This can be an ambiguous method, however, since the intersections between the various reflection layers comprise large changes in amplitude, which may be alleviated by using different estimates. Dip and azimuth, for one, can be estimated using the normal to the reflection layer, identified by calculating the gradient at that location, with one partial derivative for each dimension. The chaos attribute, on the other hand, may be used not only to enhance faults but also to identify chaotic textures within the seismic data. It can be tuned specifically for fault detection by taking into consideration the elongated, more or less vertical nature of faults. With faults appearing as changes in reflector amplitude, an edge enhancement attribute enables the enhancement of faults by measuring changes in the signal amplitude. The intersections between layers comprise sharp edges and produce large outputs under conventional edge detection techniques; using local dip estimates of the reflection layers reduces this problem.7 The local dip estimate represents a plane, and by projecting the vector of derivatives onto it, changes nearly perpendicular to the reflector yield vectors with small magnitudes, whereas changes in the direction of the reflector produce vectors with larger magnitudes. Taking the magnitude of the projected vector as the attribute value makes this attribute dependent on the amplitude of the seismic data; faults in areas of low amplitude will thus have a weak signature that may be hard for a human interpreter to detect.
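The dip-guided projection described above can be sketched in 2-D. This is an assumed minimal form: the reflector normal is estimated from a box-smoothed gradient (the smoothing kernel is an illustrative choice), and the raw gradient is projected onto the reflector plane so that amplitude changes across layer interfaces are suppressed while lateral breaks such as faults produce large values.

```python
import numpy as np

def box_smooth(a, k=5):
    """Simple k-by-k box filter (stands in for a library smoother)."""
    pad = k // 2
    p = np.pad(a, pad, mode="edge")
    out = np.zeros_like(a, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / (k * k)

def edge_enhancement(seismic, smooth=5):
    """Dip-guided edge-enhancement attribute (2-D sketch)."""
    gy, gx = np.gradient(seismic)
    # Local reflector normal estimated from a smoothed gradient field.
    ny, nx = box_smooth(gy, smooth), box_smooth(gx, smooth)
    norm = np.sqrt(nx**2 + ny**2) + 1e-12
    ny, nx = ny / norm, nx / norm
    # Remove the gradient component along the normal: changes across
    # layers are suppressed, changes along the reflector remain.
    along = gx * nx + gy * ny
    px, py = gx - along * nx, gy - along * ny
    return np.sqrt(px**2 + py**2)
```

As the text notes, the attribute still scales with local seismic amplitude, so faults in low-amplitude zones retain a weak signature.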
The visual appearance can be corrected by applying some amplitude correction, but with appropriate subsequent steps this may prove unnecessary in an automated fault extraction setting.8 Identification, location, and extraction of individual fault surfaces is a major factor in structural interpretation: fault surfaces are very important in hydrocarbon exploration because of their direct relation to hydrocarbon accumulation and to hydrocarbon flow paths. Extraction of individual fault surfaces from seismic data is, for the most part, a qualitative procedure defined by the need for thorough human interpretation.9 Recent years have seen progress in projecting stratigraphic and structural discontinuities with coherence or variance methods, which determine such discontinuities from the similarity or dissimilarity of a small number of neighboring traces. The existing methods, however, are not very precise and tend to be inefficient at isolating fine details, such as fault surfaces or sedimentary layer interfaces that may be only one pixel wide. Suitable algorithms can facilitate the extraction, separation, and labeling of fault surfaces by distinguishing portions of the surfaces and combining them into large, distinct fault surfaces.10

Modern designs and technologies make external testing increasingly difficult, and built-in self-test (BIST) has emerged as a promising solution to the VLSI testing problem. The main components of a BIST scheme are the test pattern generator (TPG), the response compactor, and the signature analyzer. The test generator applies a sequence of patterns to the circuit under test (CUT), the responses are compacted into a signature by the response compactor, and the signature is compared to a fault-free reference value. In BIST, an adjacency test pattern generation scheme can generate robust test patterns effectively.
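The test pattern generator is commonly built from a linear feedback shift register. The following Fibonacci-style LFSR is a generic sketch for illustration; the tap positions and widths are examples, not parameters from the cited scheme.

```python
def lfsr_patterns(seed, taps, width, count):
    """Fibonacci LFSR test-pattern generator (sketch). `taps` are the
    0-based feedback bit positions; with a maximal-length feedback
    polynomial the register cycles through all 2**width - 1 nonzero
    states before repeating."""
    state = seed & ((1 << width) - 1)
    assert state != 0, "an all-zero seed locks up an LFSR"
    out = []
    for _ in range(count):
        out.append(state)
        # Feedback bit: XOR of the tapped positions.
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        # Shift left and feed the new bit into position 0.
        state = ((state << 1) | fb) & ((1 << width) - 1)
    return out
```

For example, a 4-bit register with taps at bits 3 and 2 is maximal-length, producing all 15 nonzero patterns; the limitation noted below for wide circuits is what motivates seeding the register with carefully chosen initial vectors.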
Traditional adjacency test pattern generation schemes use a linear feedback shift register (LFSR) to generate initial vectors, but they cannot handle circuits with more than 30 inputs, which is where the Seeds technique may be applied. Based on an analysis of independent partial circuits in the circuit under test, an algorithm generates the seeds, that is, the small number of necessary initial vectors. By combining outputs of the shift register, the number of shift register stages is reduced. Experiments show that the method achieves maximum fault coverage with a short test length, and hence a short test time, at a software expense on the same level as traditional methods.

Three-dimensional seismic data has been used to explore the Earth's crust for over 30 years, yet imaging and subsequently identifying geologic features in the data remains a time-consuming manual task. Most current approaches fail to realistically model many 3-D geologic features and offer no integrated segmentation capabilities. In the image processing community, image structure analysis techniques have demonstrated encouraging results through filters that enhance feature structure using partial derivative information. These techniques are only beginning to be applied to the field of seismic interpretation, and the information they generate remains to be explored for feature segmentation. Plane-based methods using dynamic implicit surfaces, implemented with level set methods, have shown great potential in the computational sciences for applications such as modeling, simulation, and segmentation. Level set methods allow implicit handling of complex topologies deformed by operations in which large changes can occur without destroying the level set representation. Many real-world objects can be represented as an implicit surface, but further interpretation of those surfaces, such as the growth and segmentation of plane-like and high-positive-curvature features, is often severely limited.
In addition, the complexity of many evolving surfaces requires visual monitoring and user control in order to achieve the preferred results. With point-based techniques, on the other hand, such as the marked point process, faults can intersect. Their maximum displacement is related by a power law to their maximum length, and their size distribution is generated randomly using the fractal power law. Because each fault displaces a subseismic horizon, each fault is constrained in how much displacement is possible, based on the seismic resolution or a user-defined tolerance. The marked point process produces faults more in line with fault statistics and density, while Seeds is a design-for-testability methodology aimed at detecting faulty components in a system by incorporating test logic on-chip. In addition, Seeds can work hand in hand with level sets.

References:

Israel Cohen, Nicholas Coult, and Anthony A. Vassiliou. "Detection and Extraction of Fault Surfaces in 3D Seismic Data."
Trygve Randen, et al. "Automatic Detection and Extraction of Faults from Three-Dimensional Seismic Data."
Hans-Georg Bock, et al., eds. "Mathematics in Industry: Mathematical Methods and Modelling in Hydrocarbon Exploration and Production."
Benjamin J. Kadlec, et al. "Interactive 3-D Computation of Fault Surfaces Using Level Sets."
Barna Kerezstes, Olivier Lavialle, and Monica Borda. "Seismic Fault Detection Using Marked Point Process."
David Gibson, Michael Spann, Jonathan Turner, and Timothy Wright. "Fault Surface Detection in 3-D Seismic Data."
Biswajit Bose. 2009. "Detecting Open Surfaces in Three Dimensions." Massachusetts Institute of Technology.