About Building a Machine Learning Sandbox

If you want to set up one or more isolated environments on a single machine where you can play with your data using popular machine learning tools and libraries, install Anaconda Distribution on Oracle Cloud Infrastructure Compute.

Anaconda is a general-purpose tool for designing, building, and managing data science projects. With Anaconda, you have access to over 1,500 data science packages in R and Python. It manages libraries such as TensorFlow, NumPy, pandas, scikit-learn, and more. It also handles installing and updating applications such as Jupyter Notebook and RStudio.

Each environment is independent and isolated from the others. Each can have its own version of Python, R, or any other language, tooling, and library combination. This setup lets you keep several independent projects on one system and switch easily between environments.

Although expertise in machine learning and computer systems is not required to learn from this Solution, you should at least have some knowledge of technologies and processes used for collecting, moving, and transforming data.

You can use either Oracle Linux 7.7 or Ubuntu 18.04; this Solution shows you how to use both. We use the GPU compute shape VM.GPU3.1, which has one NVIDIA Tesla V100 GPU and 6 OCPUs, but you can also set up a sandbox on a non-GPU shape.

All About Sand Casting - What It Is and How It Works

Metal casting is one of the most basic yet most useful manufacturing methods available to designers. These processes involve pouring molten metal into a preformed mold, which yields a finished part once the metal cools. The ability to shape metal without machining has allowed for the mass production of complex parts that are both durable and inexpensive. Many processes are used to cast metal, and this article will highlight the most widely used of them, sand casting. This process uses sand to create any number of complex mold shapes, and this article will show how it works, how it fares against other methods, and where sand casting is used in industry today.

Sand casting is a casting process by which sand is used to create a mold, after which liquid metal is poured into this mold to create a part. To learn about the other forms of casting, visit our article on the types of casting processes. Sand is used in this method because it insulates well, it is relatively cheap, and it can be formed into any number of mold shapes. There are defined steps to this process (shown simplified in Figure 1), and this article will walk through each of these steps to illustrate exactly how this casting procedure is conducted.

The first step in the sand casting process involves fabricating the foundry pattern - the replica of the exterior of the casting - for the mold. These patterns are often made from materials such as wood or plastic and are oversized to allow for the cast metal to shrink as it cools. They are used to create the sand mold for the final part, and can potentially be reused depending on the pattern material. Often, two pattern halves are created separately, which together form the mold cavity (shown in Figure 1). Cores are internal mold inserts that can also be used if interior contours are needed, but they are typically disposed of after one casting. The type of pattern and its material are dictated not only by the desired part dimensions but also by the number of castings needed from each mold.

The second step is making the sand mold(s) from these patterns. The sand mold is usually made in two halves, where one side of the mold is formed with one pattern and the other side with the other pattern (shown in Figure 1). While molds may not always be in two halves, this arrangement provides the easiest method of both creating the mold and accessing the part once it is cast. The top half of the mold is known as the cope and the bottom half as the drag, and both are made by packing sand into a container (a flask) around the patterns. The operator must firmly pack (or ram) the sand into each pattern to ensure there is no loose sand, which can be done either by hand or by machine. After ramming, the patterns are removed and leave their exterior contours in the sand, where manufacturers can then cut channels and connections (known as gates/runners) into the drag and a funnel into the cope (known as a sprue). These gates/runners and sprues are necessary for an accurate casting: the runners and gates allow the metal to reach every part of the mold, while the sprue allows for easy pouring into the mold.

The third main step in sand casting is clamping the drag and cope together, making a complete mold. If a core is needed for some internal contours, it would be placed into the mold before the clamping step, and any gating/runners are also checked for misalignments.

The fourth step begins when the desired final material (almost exclusively a metal) is melted in a furnace and poured into the mold. It is carefully poured or ladled into the sprue, where the molten metal conforms to the cavity left by the patterns, and is then left to cool completely. Once the metal has cooled, manufacturers remove the sand from the mold (via vibrations, waterjets, and other non-destructive means, a process known as shakeout) to reveal the rough final part.

The fifth and final step (not shown in Figure 1) is the cleaning step, where the rough part is refined to its final shape. This cleaning includes removing the gating system and runners, as well as any residual mold/core material that remains in the final piece. The part is trimmed in areas of excess, and the surface of the casting can be sanded/polished to a desired finish. After major cleaning, each part is inspected for defects and tested to ensure compliance with the manufacturer's quality standards, so that it will perform as intended in its application.

The sand casting process has numerous advantages, especially over investment casting, another popular casting method (to learn more, read our article, All About Investment Casting). This section will briefly explore why sand casting is so widely used in industry, as well as where it falls short as a manufacturing method.

So while sand casting may be a cheaper alternative to investment casting and can provide much more complex shapes, it takes a lot more legwork to get the same accuracy, finish, and overall part quality.

It is difficult to grasp how many different technologies use sand casting. Its versatility as a casting process makes it ideal for almost any complex part, and almost every modern technology benefits from this manufacturing process. Below is a list of only a few of the products which are fabricated using the sand casting process, which shows just how varied the possible applications can be.

Sand casting, while nowhere near as precise as investment casting, is a low-cost, low complexity manufacturing process that has repeatedly proven itself as an integral part of modern manufacturing. If investment casting is too cumbersome, or if large parts are needed, consider implementing sand casting into your production line.

This article presented a brief overview of the sand casting process. For information on other products, consult our additional guides or visit the Thomas Supplier Discovery Platform to locate potential sources of supply or view details on specific products.

Machine Learning Application to Automatically Classify Heavy Minerals in River Sand by Using SEM/EDS Data - ScienceDirect

Highlights:
- A reliable method for classifying heavy minerals based on SEM/EDS data was developed by applying machine learning methods.
- Twenty-six elemental compositions of heavy minerals are chosen as the decision attributes.
- Random Forest is the most effective classifier for EDS data classification.
- The classification of heavy minerals data under the 6-second test provides a fast and convenient method.

Heavy minerals are generally trace components of sand or sandstone. Fast and accurate heavy mineral classification has become a necessity. Energy Dispersive X-ray Spectrometers (EDS) integrated with Scanning Electron Microscopy (SEM) were used to obtain rapid heavy mineral elemental compositions. However, mineral identification is challenging since there are wide ranges of spectral datasets for natural minerals. This study aimed to find a reliable machine learning classifier for identifying various heavy minerals based on EDS data. After selecting 22 distinct heavy minerals from modern river sands, we obtained their elemental data by SEM/EDS. The elemental data from a total of 3067 mineral grains were collected under various instrumental conditions. We compared the classification performance of four classifiers (Decision Tree, Random Forest, Support Vector Machine, Bayesian Network). Our results indicated that machine learning methods, especially Random Forest, can be used as the most effective classifier for heavy mineral classification.
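The modelling pattern the abstract describes can be sketched end to end with scikit-learn. The data below is synthetic (random "elemental compositions" for four hypothetical mineral classes), standing in for the paper's SEM/EDS measurements; only the general approach - a Random Forest on composition vectors - follows the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for SEM/EDS data: each row is a grain's elemental
# composition (26 elements), each class a mineral with its own signature.
n_per_class, n_elements = 200, 26
signatures = rng.uniform(0, 40, size=(4, n_elements))  # 4 hypothetical minerals
X = np.vstack([s + rng.normal(0, 2, size=(n_per_class, n_elements))
               for s in signatures])
y = np.repeat(np.arange(4), n_per_class)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)  # held-out classification accuracy
```

On real EDS spectra the classes overlap far more than in this toy setup, which is why the paper compares several classifiers rather than assuming one winner.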

Ways to Think About Machine Learning - Benedict Evans

We're now four or five years into the current explosion of machine learning, and pretty much everyone has heard of it. It's not just that startups are forming every day or that the big tech platform companies are rebuilding themselves around it - everyone outside tech has read the Economist or BusinessWeek cover story, and many big companies have some projects underway. We know this is a Next Big Thing.

Going a step further, we mostly understand what neural networks might be, in theory, and we get that this might be about patterns and data. Machine learning lets us find patterns or structures in data that are implicit and probabilistic (hence inferred) rather than explicit, that previously only people and not computers could find. They address a class of questions that were previously hard for computers and easy for people, or, perhaps more usefully, hard for people to describe to computers. And we've seen some cool (or worrying, depending on your perspective) speech and vision demos.

I don't think, though, that we yet have a settled sense of quite what machine learning means - what it will mean for tech companies or for companies in the broader economy, how to think structurally about what new things it could enable, or what machine learning means for all the rest of us, and what important problems it might actually be able to solve.

This isn't helped by the term 'artificial intelligence', which tends to end any conversation as soon as it's begun. As soon as we say 'AI', it's as though the black monolith from the beginning of 2001 has appeared, and we all become apes screaming at it and shaking our fists. You can't analyze AI.

Why relational databases? They were a new fundamental enabling layer that changed what computing could do. Before relational databases appeared in the late 1970s, if you wanted your database to show you, say, 'all customers who bought this product and live in this city', that would generally need a custom engineering project. Databases were not structured so that any arbitrary cross-referenced query was an easy, routine thing to do. If you wanted to ask a question, someone would have to build it. Databases were record-keeping systems; relational databases turned them into business intelligence systems.

This changed what databases could be used for in important ways, and so created new use cases and new billion dollar companies. Relational databases gave us Oracle, but they also gave us SAP, and SAP and its peers gave us global just-in-time supply chains - they gave us Apple and Starbucks. By the 1990s, pretty much all enterprise software was a relational database - PeopleSoft and CRM and SuccessFactors and dozens more all ran on relational databases. No-one looked at SuccessFactors or Salesforce and said "that will never work because Oracle has all the database" - rather, this technology became an enabling layer that was part of everything.

So, this is a good grounding way to think about ML today - it's a step change in what we can do with computers, and that will be part of many different products for many different companies. Eventually, pretty much everything will have ML somewhere inside and no-one will care.

An important parallel here is that though relational databases had economy of scale effects, there were limited network or winner-takes-all effects. The database being used by company A doesn't get better if company B buys the same database software from the same vendor: Safeway's database doesn't get better if Caterpillar buys the same one. Much the same actually applies to machine learning: machine learning is all about data, but data is highly specific to particular applications. More handwriting data will make a handwriting recognizer better, and more gas turbine data will make a system that predicts failures in gas turbines better, but the one doesn't help with the other. Data isn't fungible.

This gets to the heart of the most common misconception that comes up in talking about machine learning - that it is in some way a single, general purpose thing, on a path to HAL 9000, and that Google or Microsoft have each built *one*, or that Google 'has all the data', or that IBM has an actual thing called Watson. Really, this is always the mistake in looking at automation: with each wave of automation, we imagine we're creating something anthropomorphic or something with general intelligence. In the 1920s and 30s we imagined steel men walking around factories holding hammers, and in the 1950s we imagined humanoid robots walking around the kitchen doing the housework. We didn't get robot servants - we got washing machines.

Washing machines are robots, but they're not intelligent. They don't know what water or clothes are. Moreover, they're not general purpose even in the narrow domain of washing - you can't put dishes in a washing machine, nor clothes in a dishwasher (or rather, you can, but you won't get the result you want). They're just another kind of automation, no different conceptually to a conveyor belt or a pick-and-place machine. Equally, machine learning lets us solve classes of problem that computers could not usefully address before, but each of those problems will require a different implementation, and different data, a different route to market, and often a different company. Each of them is a piece of automation. Each of them is a washing machine.

Hence, one of the challenges in talking about machine learning is to find the middle ground between a mechanistic explanation of the mathematics on one hand and fantasies about general AI on the other. Machine learning is not going to create HAL 9000 (at least, very few people in the field think that it will do so any time soon), but it's also not useful to call it 'just statistics'. Returning to the parallels with relational databases, this might be rather like talking about SQL in 1980 - how do you get from explaining table joins to thinking about Salesforce.com? It's all very well to say 'this lets you ask these new kinds of questions', but it isn't always very obvious what questions. You can do impressive demos of voice recognition and image recognition, but again, what would a normal company do with that? As a team at a major US media company said to me a while ago: 'well, we know we can use ML to index ten years of video of our talent interviewing athletes - but what do we look for?'

What, then, are the washing machines of machine learning, for real companies? I think there are two sets of tools for thinking about this. The first is to think in terms of a procession of types of data and types of question:

Machine learning may well deliver better results for questions you're already asking about data you already have, simply as an analytic or optimization technique. For example, our portfolio company Instacart built a system to optimize the routing of its personal shoppers through grocery stores that delivered a 50% improvement (this was built by just three engineers, using Google's open-source tools Keras and TensorFlow).

Machine learning lets you ask new questions of the data you already have. For example, a lawyer doing discovery might search for 'angry' emails, or 'anxious' or anomalous threads or clusters of documents, as well as doing keyword searches.
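A minimal sketch of that kind of query, with a tiny invented corpus and scikit-learn: a classifier trained on a few labelled examples can then flag 'angry' messages it has never seen, rather than relying on an exact keyword list.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny labelled corpus, invented for illustration: 1 = angry, 0 = neutral.
emails = [
    "This is completely unacceptable, I am furious about the delay",
    "You have ignored my complaints again and I am very angry",
    "I demand an explanation for this outrageous mistake right now",
    "Attached is the quarterly report for your review",
    "Please find the meeting agenda for next Tuesday",
    "Thanks for the update, the numbers look fine to me",
]
labels = [1, 1, 1, 0, 0, 0]

# tf-idf features plus logistic regression: a standard text-classification baseline
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

pred = model.predict(["I am absolutely furious about this unacceptable mistake"])[0]
```

A real e-discovery system would train on thousands of labelled documents and report a ranked score rather than a hard label, but the shape of the problem is the same.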

Within this, I find imaging much the most exciting. Computers have been able to process text and numbers for as long as we've had computers, but images (and video) have been mostly opaque. Now they'll be able to see in the same sense as they can read. This means that image sensors (and microphones) become a whole new input mechanism - less a camera than a new, powerful and flexible sensor that generates a stream of (potentially) machine-readable data. All sorts of things will turn out to be computer vision problems that don't look like computer vision problems today.

This isn't about recognizing cat pictures. I met a company recently that supplies seats to the car industry, which has put a neural network on a cheap DSP chip with a cheap smartphone image sensor, to detect whether there's a wrinkle in the fabric (we should expect all sorts of similar uses for machine learning in very small, cheap widgets, doing just one thing, as described here). It's not useful to describe this as artificial intelligence: it's automation of a task that could not previously be automated. A person had to look.
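A hedged sketch of the same idea, with synthetic image patches in place of the company's real fabric images and a small scikit-learn network in place of whatever runs on their DSP: the classifier learns to flag patches that contain a bright "wrinkle" ridge.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

# Synthetic 16x16 grayscale patches: smooth fabric vs. fabric with a
# bright ridge ("wrinkle") across a random row. Purely illustrative data.
def make_patch(wrinkled):
    patch = rng.normal(0.5, 0.05, size=(16, 16))
    if wrinkled:
        patch[rng.integers(2, 14)] += 0.4  # the wrinkle shows up as a ridge
    return patch.ravel()

X = np.array([make_patch(i % 2 == 1) for i in range(400)])
y = np.arange(400) % 2

# A tiny neural network: train on 300 patches, test on the remaining 100.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X[:300], y[:300])
accuracy = clf.score(X[300:], y[300:])
```

An embedded deployment would use a compact convolutional network and a camera feed, but the point stands: "is there a wrinkle?" becomes a classification problem rather than a job for a person.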

This sense of automation is the second tool for thinking about machine learning. Spotting whether there's a wrinkle in fabric doesn't need 20 years of experience - it really just needs a mammal brain. Indeed, one of my colleagues suggested that machine learning will be able to do anything you could train a dog to do, which is also a useful way to think about AI bias (What exactly has the dog learnt? What was in the training data? Are you sure? How do you ask?), but also limited because dogs do have general intelligence and common sense, unlike any neural network we know how to build. Andrew Ng has suggested that ML will be able to do anything you could do in less than one second. Talking about ML does tend to be a hunt for metaphors, but I prefer the metaphor that this gives you infinite interns, or, perhaps, infinite ten year olds.

Five years ago, if you gave a computer a pile of photos, it couldn't do much more than sort them by size. A ten year old could sort them into men and women, a fifteen year old into cool and uncool, and an intern could say 'this one's really interesting'. Today, with ML, the computer will match the ten year old and perhaps the fifteen year old. It might never get to the intern. But what would you do if you had a million fifteen year olds to look at your data? What calls would you listen to, what images would you look at, and what file transfers or credit card payments would you inspect?

That is, machine learning doesn't have to match experts or decades of experience or judgement. We're not automating experts. Rather, we're asking: listen to all the phone calls and find the angry ones. Read all the emails and find the anxious ones. Look at a hundred thousand photos and find the cool (or at least weird) people.

In a sense, this is what automation always does; Excel didn't give us artificial accountants, Photoshop and InDesign didn't give us artificial graphic designers, and indeed steam engines didn't give us artificial horses. (In an earlier wave of AI, chess computers didn't give us a grumpy middle-aged Russian in a box.) Rather, we automated one discrete task, at massive scale.

Where this metaphor breaks down (as all metaphors do) is in the sense that in some fields, machine learning can not just find things we can already recognize, but find things that humans can't recognize, or find levels of pattern, inference or implication that no ten year old (or 50 year old) would recognize. This is best seen in DeepMind's AlphaGo. AlphaGo doesn't play Go the way the chess computers played chess - by analysing every possible tree of moves in sequence. Rather, it was given the rules and a board and left to work out strategies by itself, playing more games against itself than a human could do in many lifetimes. That is, this is not so much a thousand interns as one intern that's very, very fast, and you give your intern 10 million images and they come back and say 'it's a funny thing, but when I looked at the third million images, this pattern really started coming out'. So, what fields are narrow enough that we can tell an ML system the rules (or give it a score), but deep enough that looking at all of the data, as no human could ever do, might bring out new results?

I spend quite a lot of time meeting big companies and talking about their technology needs, and they generally have some pretty clear low-hanging fruit for machine learning. There are lots of obvious analysis and optimisation problems, and plenty of things that are clearly image recognition problems or audio analysis questions. Equally, the only reason we're talking about autonomous cars and mixed reality is because machine learning (probably) enables them - ML offers a path for cars to work out what's around them and what human drivers might be going to do, and offers mixed reality a way to work out what I should be seeing, if I'm looking through a pair of glasses that could show anything. But after we've talked about wrinkles in fabric or sentiment analysis in the call center, these companies tend to sit back and ask, 'well, what else?' What are the other things that this will enable, and what are the unknown unknowns that it will find? We've probably got ten to fifteen years before that starts getting boring.

ML Related Product | ONPASSIVE

Empower and engage your data science platform by collaborating effectively with data architects and business teams to build and continuously improve machine learning models, with appropriate governance in place.

ONPASSIVE endeavours to attract the best worldwide talent focused on research excellence in deep learning and machine learning. Our ML experts and analysts are a dynamic network of imaginative problem solvers, working across disciplines on both interest-driven and associated research.

Every product that is built and envisioned depends on simplifying and improving every task that we do now. As the science of machine learning develops further, our products will evolve and develop, offering better features.

ONPASSIVE uses cohesive and robust technology in building its products. Its deep learning algorithms guarantee that every task you do today will enhance your machine's efficiency and productivity tomorrow. Deep data analytics improves the logical response system, creating sentient machines that redefine our relationship with them.

ONPASSIVE aims to introduce products that are enabled by unsupervised learning. This results in a whole new league of applications that perform with minimal human intervention. We are moving into a new era of technology, and ONPASSIVE intends to lead this move with a technology that reshapes the entire ecosystem of Intelligent Sentient Machines.

We are here to truly democratize AI. Machine learning organizations today have focused on helping to build AI models, yet to truly democratize Artificial Intelligence and make it accessible to everyone, organizations should be prepared to also manage AI, not only build it. The AI talent gap is real, and we will help address it by reducing the reliance on expert engineers to manage AI frameworks.

Machine Learning for Risk and Returns | Refinitiv Perspectives

Machine learning for risk management and compliance has been driving the adoption of AI-based technology among financial services firms. But as our survey reveals, #MLReadyData is also starting to have an impact in the pursuit of alpha.

Our recent survey of senior personnel in global finance, including c-suite executives and data scientists, revealed that 90 percent of respondents said they had deployed machine learning in one or more departments.

The number one application cited by survey respondents was risk management, chosen by 84 percent of respondents, well ahead of investment idea generation (62 percent). However, that balance is shifting as AI technology matures.

By using these technologies to better understand price fluctuations, measure exposure and model how events would impact different investment scenarios, portfolio managers and traders are reducing risk and protecting investments.

From client onboarding and monitoring to investigations, machine learning is being used to search and merge multiple data sources, including adverse media, to better understand the risks associated with an individual or entity.

Finally, we are seeing the advanced application of machine learning to help identify previously unseen patterns and networks of entities, allowing institutions to be more proactive in financial crime identification and prevention.

When it comes to investment idea generation, traders are under increasing pressure to create an edge, and so are using machine learning to mine new data sources, identify patterns, search for signals and make predictions in the search for market-beating strategies.

With billions of client dollars at stake, moving AI-driven investment strategies from the drawing board to the trading floor can be challenging, with investors needing assurance over the robustness of new models.

Finally, no matter how well built the model, its output will only be as accurate as the data that feeds it. In fact, our survey identified poor data quality and availability as the key challenges to wider machine learning adoption. This is where Refinitiv can help with #MLReadyData.

Until recently, highly experienced data scientists needed to research and hand-build machine learning models from scratch. Now open source frameworks such as TensorFlow, PyTorch and scikit-learn make it possible to rapidly create and test new models.
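As a small illustration of that point, a few lines of scikit-learn now stand up a scaled, cross-validated baseline classifier with no hand-built model at all. The dataset here is a stock scikit-learn example rather than financial data; the workflow is what matters.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Load a bundled example dataset, build a scale-then-classify pipeline,
# and estimate out-of-sample accuracy with 5-fold cross-validation.
X, y = load_breast_cancer(return_X_y=True)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, X, y, cv=5)
mean_accuracy = scores.mean()
```

Swapping the estimator for a gradient-boosted tree or a neural network is a one-line change, which is exactly the rapid iteration the paragraph above describes.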

At the same time, greater collaboration and transparency will help to address current sticking points. All this should provide fertile ground for the further growth of effective machine learning applications.

We predict that artificial intelligence will be the single greatest enabler of competitive advantage in the financial services sector and Refinitiv can help you to take full advantage of machine learning for risk and returns.

Data Science and Machine Learning: Making Data-Driven Decisions

The Data Science and Machine Learning: Making Data-Driven Decisions program has a curriculum carefully crafted by MIT faculty to give you the skills and knowledge to apply data science techniques and make data-driven decisions. This data science course is designed for working professionals looking to grow their careers in the data science field, with solid conceptual foundations and a deep understanding of how to solve problems using the most relevant algorithms and techniques across statistics, machine learning, deep learning, network analytics, recommendation systems, and more.

For data scientists and machine learning specialists, Python is a lingua franca owing to its versatility and wide adoption. To strengthen your Python foundations, this module focuses on NumPy, pandas, and data visualization.

This week will help you understand the role of statistics in helping organizations make effective decisions, introduce its most widely used tools, and teach you to solve business problems using analysis, data interpretation, and experiments.

Here, you will learn about linear and nonlinear regression together with their extensions, including the important case of logistic regression for binary classification and causal inference where the goal is to understand the effects of actively manipulating a variable as opposed to passively measuring it.
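As a taste of what the binary-classification material covers, here is a minimal sketch of logistic regression fit by plain gradient descent on made-up data (the two Gaussian clusters, learning rate, and iteration count are all illustrative choices, not course material):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two Gaussian clusters -> a roughly linearly separable binary problem.
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(2)
b = 0.0
lr = 0.1
for _ in range(500):
    p = sigmoid(X @ w + b)           # predicted P(y = 1 | x)
    grad_w = X.T @ (p - y) / len(y)  # gradient of the average log-loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

preds = (sigmoid(X @ w + b) >= 0.5).astype(int)
accuracy = np.mean(preds == y)
```

The model outputs a probability, and thresholding it at 0.5 turns the regression into a classifier; that is the essential step from linear regression to logistic regression.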

Here, you will learn about modern regression with high-dimensional data, or finding a needle in a haystack: for large datasets, it becomes necessary to sort out which variables are relevant for prediction and which are not. Recent years have witnessed the development of new statistical techniques, such as the Lasso and Random Forests, that are computationally well suited to large datasets and that automatically select the relevant variables.
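The "needle in a haystack" behavior of the Lasso can be sketched in a few lines. This is a simplified illustration on synthetic data, solving the Lasso by proximal gradient descent (ISTA) rather than by any particular library routine; only 3 of 20 features truly matter, and the L1 penalty should zero out the rest:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 20
X = rng.normal(size=(n, d))
true_w = np.zeros(d)
true_w[:3] = [3.0, -2.0, 1.5]              # only features 0-2 are relevant
y = X @ true_w + rng.normal(scale=0.1, size=n)

lam = 0.1                                   # L1 penalty strength
L = np.linalg.norm(X, 2) ** 2 / n           # Lipschitz constant of the gradient
step = 1.0 / L
w = np.zeros(d)
for _ in range(1000):
    grad = X.T @ (X @ w - y) / n            # gradient of the least-squares term
    z = w - step * grad
    # Soft-thresholding: the proximal operator of the L1 norm.
    w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

selected = np.flatnonzero(np.abs(w) > 1e-6)  # surviving (selected) features
```

Note the automatic variable selection: the soft-thresholding step drives irrelevant coefficients exactly to zero, which least squares alone never does.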

This part will cover regression and causal inference to explain why correlation does not imply causation and how we can overcome this intrinsic limitation of regression by resorting to randomized control studies or controlling for confounding.

In this week, you will learn the basics of anomaly detection and classification, and the fundamentals of hypothesis testing, which is the formalization of scientific inquiry. This delicate statistical setup obeys a certain set of rules that will be explained and put in context with classification.
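The core mechanics of hypothesis testing fit in a few lines. The sketch below, on simulated data, computes a Welch two-sample t statistic by hand and compares it to the usual 5% critical value; the group sizes, effect size, and threshold are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.normal(loc=0.0, scale=1.0, size=100)   # control group
b = rng.normal(loc=1.0, scale=1.0, size=100)   # treatment group (true effect = 1.0)

# Welch's t statistic: difference in means scaled by its standard error.
se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
t_stat = (b.mean() - a.mean()) / se

# |t| beyond ~1.96 rejects H0 (equal means) at the 5% level
# for samples this large.
reject_h0 = abs(t_stat) > 1.96
```

The "set of rules" the text alludes to shows up here as the fixed rejection threshold: it is chosen before looking at the data, so the false-positive rate under the null is controlled at 5%.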

Deep learning has emerged as a driving force in the ongoing technological revolution. The essence of deep learning lies in its ability to imitate the human brain in processing data for various purposes, without explicit human supervision. Neural networks are at the heart of this technology. This week will take you beyond traditional ML and into the realm of neural networks and deep learning. You'll learn how deep learning can be successfully applied to areas such as computer vision, and more.
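To make "neural networks are at the heart of this technology" concrete, here is a minimal sketch, not course code: a one-hidden-layer network trained by backpropagation in plain NumPy to learn XOR, a function no linear model can fit. The architecture (8 tanh units) and learning rate are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1 = rng.normal(scale=1.0, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=1.0, size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    h = np.tanh(X @ W1 + b1)            # hidden layer
    out = sigmoid(h @ W2 + b2)          # output probability
    # Backpropagate the cross-entropy loss (averaged over the 4 samples).
    d_out = out - y
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ d_out / 4; b2 -= lr * d_out.mean(0)
    W1 -= lr * X.T @ d_h / 4;   b1 -= lr * d_h.mean(0)

preds = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
```

The hidden layer is what buys the nonlinearity: remove it and the model collapses back to logistic regression, which cannot represent XOR.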

As organizations increasingly lean towards data-driven approaches, an understanding of recommendation systems can help not only data science experts but also professionals in other areas, such as marketing, who are expected to be data literate today. Learn why recommendation systems are now everywhere, and gain insight into what is required to build a good recommendation system, covering statistical modeling and algorithms.

Collaborative filtering is an aspect of recommendation systems with which we interact quite frequently. Upon collecting data on preferences of multiple users, collaborative filtering makes predictions for the choice of a particular user.
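A toy version of that idea fits in a dozen lines. The sketch below (a made-up ratings matrix, not course material) does user-based collaborative filtering: it predicts user 0's missing rating for item 2 as a similarity-weighted average of the other users' ratings:

```python
import numpy as np

# Rows are users, columns are items; 0 means "not rated".
R = np.array([
    [5, 4, 0, 1],   # user 0: rating for item 2 is unknown
    [5, 5, 4, 1],   # user 1: similar taste to user 0
    [4, 4, 5, 2],   # user 2: similar taste to user 0
    [1, 2, 1, 5],   # user 3: opposite taste
], dtype=float)

def cosine(u, v):
    mask = (u > 0) & (v > 0)            # compare only co-rated items
    if not mask.any():
        return 0.0
    return u[mask] @ v[mask] / (
        np.linalg.norm(u[mask]) * np.linalg.norm(v[mask]) + 1e-12)

sims = np.array([cosine(R[0], R[j]) for j in range(1, 4)])
ratings = R[1:, 2]                       # other users' ratings of item 2
pred = (sims @ ratings) / sims.sum()     # similarity-weighted average
```

In practice one would mean-center each user's ratings first (adjusted cosine), since raw cosine makes all positive-rating users look somewhat similar, but the weighted-average structure is the essence of collaborative filtering.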

In this week, you will get a systematic overview of methods for analyzing large networks, determining important structures in such networks, and inferring missing data in networks. An emphasis is placed on graphical models, both as a powerful way to model network processes and as a means to facilitate efficient statistical computation.

Here, you will learn about the common descriptive measures of a network, such as centrality, closeness, and betweenness, and standard stochastic models for networks, such as Erdos-Renyi and preferential attachment, as well as infection models, notions of influence, and more.
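Two of those descriptive measures are easy to compute from scratch. The sketch below (a made-up five-node graph, plain Python/NumPy rather than a graph library) computes normalized degree centrality directly and closeness centrality via breadth-first search; node 0 is the hub and should score highest on both:

```python
import numpy as np
from collections import deque

# A small undirected "star plus tail" graph: 0 is connected to 1, 2, 3,
# and 4 hangs off 3.
edges = [(0, 1), (0, 2), (0, 3), (3, 4)]
n = 5
adj = [[] for _ in range(n)]
for u, v in edges:
    adj[u].append(v); adj[v].append(u)

# Degree centrality: fraction of other nodes each node touches.
degree = np.array([len(adj[i]) for i in range(n)]) / (n - 1)

def closeness(src):
    # BFS shortest-path distances from src to every other node.
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return (n - 1) / sum(dist.values())   # higher = closer to everyone

close = np.array([closeness(i) for i in range(n)])
hub = int(np.argmax(close))
```

The two measures can disagree on larger graphs (a node can have few links yet sit centrally), which is why the course covers several centrality notions rather than one.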

In this week, you will learn about some practical examples of temporal data sources and how we can begin to understand them. Then, you will dive into several strategies for feature extraction, including Deep Feature Synthesis with primitives and stacking. Finally, you will look toward models for the real world and how to ensure they successfully predict future data.

In this part, you will learn how to use feature engineering techniques to extract meaningful insights from temporal data, along with effective strategies for evaluating model performance and preparing a model for deployment in the real world.
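As a small, illustrative example of temporal feature extraction (the series and feature choices are made up, and this uses plain pandas rather than any deep-feature-synthesis library), lag and rolling-window primitives can be derived like this:

```python
import pandas as pd

# A short made-up daily series.
idx = pd.date_range("2024-01-01", periods=10, freq="D")
s = pd.Series([3, 4, 6, 5, 7, 9, 8, 10, 12, 11], index=idx, name="demand")

features = pd.DataFrame({
    "demand": s,
    "lag_1": s.shift(1),                 # yesterday's value
    "roll_mean_3": s.rolling(3).mean(),  # 3-day average
    "roll_max_3": s.rolling(3).max(),    # 3-day peak
    "pct_change": s.pct_change(),        # day-over-day growth
}).dropna()                              # drop warm-up rows with no history
```

Stacking such primitives (a rolling mean of a lag, a max of a percentage change, and so on) is the basic move behind deep feature synthesis; the course builds on exactly this kind of table.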

Following a learn-by-doing pedagogy, the Data Science and Machine Learning Program offers you the opportunity to construct your understanding by solving real-world case studies and practice activities. Below are samples of potential project topics and case studies.

Learn from the vast knowledge of top MIT faculty in the field of Data Science and Machine Learning, along with experienced data science and machine learning practitioners from leading global organizations.

The Data Science and Machine Learning: Making Data-Driven Decisions Program is distinguished by its unique combination of MIT academic leadership, recorded lectures by MIT faculty, an application-based pedagogy, and personalized mentorship from industry experts.

Yes, the program has been designed keeping in mind the needs of working professionals. Thus, you can learn the practical applications of data science and machine learning from the convenience of your home and within an efficient 10-week duration.

Each week involves 2 hours of recorded lectures, which include hands-on practical applications and problem-solving, along with 2 hours of mentored learning sessions. Additionally, based on your background, you should expect to invest between 2 and 4 hours every week in self-study and practice. That amounts to a total time commitment of 6-8 hours per week.

No. All the requisite learning material is provided online to candidates through the Learning Management System. But given that this field is vast and ever expanding, there is always more you can read, and a list of recommended books and other resources will be provided for your deep-dive reading pleasure.

Please note that submitting the admission fee constitutes enrolling in the program, and the cancellation penalties below will apply. If you are unable to attend your program, please review our dropout and refund policies below.

You will need to complete your online application form. On receiving the application, the Great Learning program team will review it to determine your fit with the program. If selected, you will receive an offer for the upcoming cohort. Secure your seat by paying the fee.

Please fill in the form and a Program Advisor from Great Learning will reach out to you. You can also reach out to us at [email protected] or +1 617 539 7216

This program is delivered in collaboration with Great Learning. Great Learning is a professional learning company with a global footprint in 140+ countries. Its mission is to make professionals around the globe proficient and future-ready. Great Learning collaborates with MIT IDSS and provides industry experts, student counsellors, course support, and guidance to ensure students get hands-on training and live personalized mentorship on the application of concepts taught by the MIT IDSS faculty.

machine learning-based seismic spectral attribute analysis to delineate a tight-sand reservoir - advances in engineering

The Sulige gas field in the central Ordos Basin is the largest gas field in China. The first exploratory well produced natural gas from a tight sandstone reservoir, namely the P1h8 reservoir. In the early phase of the project, 2D seismic lines were acquired to determine the gas potential. Since 2010, however, 3D seismic surveys have been acquired in the Sulige gas field to better characterize reservoir heterogeneity and to optimize the placement of expensive horizontal wells. At present, horizontal wells yield more gas than their vertical counterparts, yet the number of horizontal wells is much smaller. To further enhance gas production, more horizontal wells ought to be drilled. Moreover, since the massive sands at the braid bars are the potential targets for horizontal wells, it is important to better map and predict the distribution of economically viable tight sandstones throughout the field. In light of this, many high-quality 3D seismic approaches have been presented; previous reports have explored the short-time Fourier transform, the continuous wavelet transform, and matching pursuit decomposition. Nevertheless, more is needed to delineate the subsurface depositional facies and reservoir thicknesses.

Considering the linear and nonlinear correlations between seismic spectral attributes (SSAs) and sand thickness in the Sulige gas field, one can postulate that a combination of multiple linear regression (MLR) and a radial basis function neural network (RBFNN) could yield a better regression model. Bearing this in mind, researchers from the National Engineering Laboratory for Offshore Oil Exploration in China, Professor Zhiguo Wang and Professor Jinghuai Gao, in collaboration with Professor Dengliang Gao at West Virginia University in the United States, and Dr. Xiaolan Lei and Professor Daxing Wang at the CNPC Changqing Oilfield Company (China), exploited the advantages of the combined MLR and RBFNN by applying machine learning-based spectral attribute analysis to assist 3D seismic interpretation of tight gas reservoirs in the Sulige field, Ordos Basin, western China. Their work is currently published in the research journal Marine and Petroleum Geology.

In their workflow, they first implemented seismic spectral decomposition using the continuous wavelet transform with generalized Morse wavelets. Second, they extracted SSAs of the target reservoir, after which they performed multi-dimensional data analysis using principal component analysis, significantly reducing the computational time and storage space required for SSA analysis and visualization. In addition, using the red-green-blue (RGB) blending technique, the team produced a high-resolution subsurface depositional facies map from the three principal components reduced from the original multi-dimensional SSAs.
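The dimensionality-reduction and RGB-blending steps of this kind of workflow can be sketched generically. The code below is not the authors' implementation: it uses random numbers as a stand-in for real spectral attributes and SVD-based PCA in NumPy, simply to show how many attributes per trace are reduced to three components that each drive one color channel:

```python
import numpy as np

rng = np.random.default_rng(4)
n_traces, n_attrs = 1000, 12        # e.g. 12 spectral attributes per trace
A = rng.normal(size=(n_traces, n_attrs))   # synthetic stand-in for SSAs

A0 = A - A.mean(axis=0)             # center each attribute (PCA preprocessing)
U, S, Vt = np.linalg.svd(A0, full_matrices=False)
pcs = A0 @ Vt[:3].T                 # project onto the top 3 principal components

# Min-max scale each component to [0, 1] -> red, green, blue channels.
rgb = (pcs - pcs.min(0)) / (pcs.max(0) - pcs.min(0))

# Fraction of total variance carried by each color channel.
explained = S[:3] ** 2 / (S ** 2).sum()
```

The payoff is exactly what the paragraph describes: instead of inspecting a dozen attribute maps separately, the interpreter views one composite image whose colors summarize most of the variance in the attribute stack.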

The research team reported that validation analysis of the 9 blind test wells revealed that increased heterogeneity associated with seismic facies changes, together with the lack of training wells, could lead to false thickness predictions. Moreover, based on a geological model of the P1h8 reservoir, a correlation between spectral attributes and sand thickness was established.

In summary, starting from moderate-quality 3D seismic data, the study successfully applied a spectral attribute analysis workflow, using the continuous wavelet transform with generalized Morse wavelets and an artificial neural network, to improve the delineation of sandstones in the Lower Permian Xiashihezi Formation, Ordos Basin, China. Remarkably, the results illustrated significant variation in reservoir thickness across the field, which can be useful for evaluating reservoir heterogeneity and connectivity. In a statement to Advances in Engineering, the authors highlighted that their machine-aided multi-dimensional SSA analysis could be useful for play screening in the reconnaissance phase, prospect generation and maturation in the exploration phase, and well placement in the development phase.

About the author Zhiguo Wang is currently an Associate Professor with the School of Mathematics and Statistics, Xi'an Jiaotong University. He was a Visiting Scholar (2016-2017) with the Department of Electrical and Computer Engineering, Duke University, USA. He was the recipient of the SEG/ExxonMobil SEP Travel Award in 2009, the SEG/Chevron SLS Travel Award in 2009, the SEG Annual Meeting Special Global Technical Session Travel Grant in 2014, and the Science and Technological Advancement Award from the Ministry of Education in 2016. He is, and has been, principal investigator on several NSFC projects. He serves as a member of the SEG Membership Committee, SEG Emerging Professionals International Committee, SEG Travel Grant Committee, and SEG EVOLVE Technical Committee. His research is mainly focused on seismic signal analysis, seismic machine learning, seismic geomorphology, and seismic interpretation for the oil and gas industry. He is an active member of SEG and AAPG.

About the author Professor Dengliang Gao received a PhD (1997) in geology and geophysics from Duke University. He is a Professor of Geology and Geophysics at West Virginia University, was an adjunct professor at the University of Houston (2007), and was a lecturer at Tongji University, China (1986-1991). Before joining the faculty at West Virginia University in 2009, he worked at Chevron Energy Technology Company (2008-2009), Marathon Oil Company (1998-2007), and Exxon Production Research Company (1997-1998). He is the recipient of two US patents, and was selected to receive the Robert H. Dott Sr. Memorial Award (2015) from AAPG, the DOE/NETL-RUA Outstanding Research Award (2013) from URS, two Grover E. Murray best (second place) published paper awards (2006) from GCAGS, and the Science and Technological Advancement Award (1991) from China's Education Commission.

He was twice recognized as an outstanding GEOPHYSICS associate editor (2007, 2008) and as an outstanding GEOPHYSICS peer reviewer (2006) by SEG. He served as an INTERPRETATION special section editor (2015, 2017, 2018, 2019, 2020), a GEOPHYSICS associate editor (2006-2015), an AAPG special publication editor (2009-2012), on the AAPG Publications Committee (2006-2009), and as Co-chair of the 31st Annual GCSSEPM Foundation Bob F. Perkins Research Conference on seismic attributes (2011). His research interests include seismic texture and seismic structure analysis for subsurface characterization.

About the author Daxing Wang is currently a Professor-level Senior Engineer with the Exploration and Development Research Institute of PetroChina Changqing Oilfield Company, Xi'an, China. He received B.S. and M.S. degrees in geophysical exploration from Southwest Petroleum University, Sichuan, China, in 1983 and 1995, respectively, and a Ph.D. degree in solid geophysics from the Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, China, in 2005. His research interests include reservoir characterization and hydrocarbon detection. Dr. Wang was the recipient of the first prize of the science and technology award of Shaanxi Province in 2017.

Jinghuai Gao is currently a Distinguished Professor with the School of Electronic and Information Engineering and the School of Mathematics and Statistics, Xi'an Jiaotong University. He received the M.S. degree in applied geophysics from Chang'an University, Xi'an, China, in 1991, and the Ph.D. degree in electromagnetic field and microwave technology from Xi'an Jiaotong University, Xi'an, in 1997.

From 1997 to 2000, he was a Post-Doctoral Researcher with the Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, China. In 1999, he was a Visiting Scientist with the Modeling and Imaging Laboratory, University of California at Santa Cruz, USA. He is also an Associate Director of the National Engineering Laboratory for Offshore Oil Exploration, Xi'an Jiaotong University. He is the principal investigator of the fundamental theory and method for geophysical exploration and development of unconventional oil and gas, a major program of the National Natural Science Foundation of China under Grant 41390450. His research interests include seismic wave propagation and imaging theory, seismic reservoir and fluid identification, and seismic inverse problem theory and methods.

Dr. Gao was the recipient of the First Prize for Scientific and Technological Progress from the Ministry of Education in 2016 and the Chen Zongqi Geophysical Best Paper Award in 2013, and holds 37 China patents. He has published more than 170 peer-reviewed journal papers and chapters. He serves as an associate editor of IEEE Transactions on Geoscience and Remote Sensing and an editorial board member of the Chinese Journal of Geophysics. He is an active member of IEEE and SEG.

Zhiguo Wang, Dengliang Gao, Xiaolan Lei, Daxing Wang, Jinghuai Gao. Machine learning-based seismic spectral attribute analysis to delineate a tight-sand reservoir in the Sulige gas field of central Ordos Basin, western China. Marine and Petroleum Geology, 113 (2020), 104136.

Machine Learning Applications in Detecting Sand Boils from Images


Highlights:
  - Highly accurate sand boil detection from images.
  - Showcases automated levee monitoring.
  - Stacking-based robust machine learning algorithm.
  - Comparisons of machine learning algorithms for sand boil detection.

Levees protect vast amounts of commercial and residential property. However, these structures require constant maintenance and monitoring due to threats such as severe weather, sand boils, land subsidence, and seepage. In this research, we focus on detecting sand boils. Sand boils occur when water under pressure wells up to the surface through a bed of sand, and they make levees especially vulnerable. Object detection is a good approach to confirming the presence of sand boils in satellite or drone imagery, and it can support an automated levee-monitoring methodology. Since sand boils have distinct features, applying object detection algorithms to them can yield accurate detection. To the best of our knowledge, this is the first work to detect sand boils from images. We compare some of the latest deep learning methods, the Viola-Jones algorithm, and other non-deep-learning methods to determine the best performing one. We also train a stacking-based machine learning method for accurate prediction of sand boils. The accuracy of our robust model is 95.4%.
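The stacking approach described above combines the predictions of several base classifiers through a meta-learner. The following is a minimal sketch of that idea using scikit-learn's `StackingClassifier`; it is not the authors' code, and the synthetic feature vectors and the particular choice of base learners (a random forest and an SVM feeding a logistic-regression meta-learner) are illustrative assumptions only:

```python
# Sketch of a stacking ensemble for binary classification, standing in for
# the sand-boil detector described in the abstract. Real inputs would be
# feature vectors extracted from levee imagery; here they are synthetic.
import numpy as np
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Synthetic stand-in for image features: 200 samples, 32 features each.
X = rng.normal(size=(200, 32))
# Toy "sand boil present" label derived from the first two features.
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Base learners' predictions become inputs to the final meta-learner.
stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
    ],
    final_estimator=LogisticRegression(),
)
stack.fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, stack.predict(X_te)))
```

In a stacking ensemble the meta-learner is trained on out-of-fold predictions of the base models, which is what lets it learn how much to trust each one; scikit-learn handles that cross-validation internally.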
