Moving data from big to smart
Collecting process data and using it correctly is a vital part of predictive maintenance programmes – and proves the adage that knowledge is power, explains Erwin Weis, head of IoT Technology at SKF
Collecting data is one thing, but making sense of it is what adds value. In modern industrial parlance, it’s about turning ‘Big Data’ into ‘Smart Data’.
Big Data is often considered to be simply the vast amount of data – generated by sensors, devices, systems and other measurement equipment – that then has to be made sense of. However, it is actually a bit more than that.
Data does not all take the same form. Some is ‘structured’ – sensor output, for example – and can generally be organised in a database format. Other data is ‘unstructured’, and might include text, images, audio or video. The mixture of these two very different kinds of data is part of the complexity of ‘Big Data’.
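To make the distinction concrete, here is a minimal sketch in Python – the asset name, field names and attachments are invented for illustration – showing a structured sensor record alongside unstructured material:

```python
from datetime import datetime, timezone

# Structured data: fixed fields and types, ready for a database table.
sensor_reading = {
    "machine_id": "pump-07",          # hypothetical asset name
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "temperature_c": 71.4,
    "pressure_bar": 6.2,
    "vibration_rms_mm_s": 3.8,
}

# Unstructured data: free-form content with no fixed schema; it must be
# interpreted (by a person or a trained model) before it can be queried.
inspection_note = "Slight rattle near the drive-end bearing during ramp-up."
attachments = ["inspection_photo_0412.jpg", "ramp_up_audio.wav"]
```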
In general, Big Data is characterised by a new level of complexity and by requirements in terms of volume, velocity, variety and veracity, calling for new database systems to analyse and make use of the data.
The challenge is then to make sense of it and turn it into ‘Smart Data’. Enriching the raw data with knowledge and expertise is the way to achieve this. In an industrial context, this process is most often applied to operations and maintenance. Gathering lots of process data and interpreting it properly gives operators the information they need to improve running conditions. Correct sifting and interpretation of the data can help to improve machine performance or prolong machine life, by adjusting conditions based on the results. At its simplest, an experienced operator might take several readings – temperature, pressure and vibration, for example – and make a ‘diagnosis’.
This structured data forms the basis of condition-based maintenance (CBM) and predictive maintenance regimes. Taking the correct measurements, and taking action as soon as they stray from the norm, helps to keep machines running for longer. A simple example is vibration monitoring for bearings, in which a single data set can help to prolong machine lifetime and boost reliability.
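As a rough illustration of this kind of ‘stray from the norm’ check, the following Python sketch compares readings against alert limits; all limit values here are hypothetical placeholders, not SKF recommendations:

```python
# A minimal rule-based check in the spirit of condition-based maintenance:
# compare the latest readings against alert limits and flag any excursion.
ALERT_LIMITS = {
    "temperature_c": 85.0,
    "pressure_bar": 8.0,
    "vibration_rms_mm_s": 4.5,   # e.g. an ISO 10816-style zone boundary
}

def check_readings(readings: dict) -> list[str]:
    """Return the names of parameters that exceed their alert limit."""
    return [name for name, value in readings.items()
            if name in ALERT_LIMITS and value > ALERT_LIMITS[name]]

alarms = check_readings({"temperature_c": 71.4,
                         "pressure_bar": 6.2,
                         "vibration_rms_mm_s": 5.1})
print(alarms)  # ['vibration_rms_mm_s'] -> schedule an inspection
```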
SKF engineers recently helped the Scuderia Ferrari F1 team gather data from its test chambers in real time. A system based on SKF’s IMx platform continuously monitored the vibration behaviour of drive components in the test chamber, processing up to 100,000 observations per second.
This data was collated up to 20 times per second – to break it into more manageable chunks – and analysed. This, says Scuderia Ferrari, helped the team “focus on results rather than data”.
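The article does not detail how SKF’s processing works internally, but a plausible sketch of such windowed reduction – using the two rates quoted above, with everything else illustrative – might look like this:

```python
import math

SAMPLE_RATE_HZ = 100_000   # raw observations per second (from the article)
WINDOWS_PER_SEC = 20       # summary records per second (from the article)
WINDOW_SIZE = SAMPLE_RATE_HZ // WINDOWS_PER_SEC   # 5,000 samples per 50 ms window

def summarise(window: list[float]) -> dict:
    """Collapse one 50 ms window of raw vibration samples into summary features."""
    rms = math.sqrt(sum(x * x for x in window) / len(window))
    return {"rms": rms, "peak": max(abs(x) for x in window)}

def reduce_stream(samples: list[float]) -> list[dict]:
    """Turn one second of raw data into WINDOWS_PER_SEC manageable records."""
    return [summarise(samples[i:i + WINDOW_SIZE])
            for i in range(0, len(samples) - WINDOW_SIZE + 1, WINDOW_SIZE)]
```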
Before continuous monitoring, the team had to go into an individual test chamber to see exactly what was happening inside. Online checking of high-frequency data – in real time – was impossible. This made troubleshooting a slow process, and made it impossible to create forecasts for the service life of components based on trend values.
SKF adapted its IMx platform to suit this application, as the system had previously been used mostly for monitoring applications such as wind turbines, which require far lower data volumes, fewer channels and lower processing speeds.
Structured progress
Structured data can be interpreted automatically: if a certain parameter rises, for example, normal and abnormal behaviour can be distinguished and the machine adjusted – or a diagnosis made. The ongoing challenge is to automate everything, including the interpretation of unstructured data.
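A minimal sketch of automatic normal/abnormal identification for a single structured parameter – assuming a simple statistical baseline, which is one generic approach rather than SKF’s actual method – could look like this:

```python
import statistics

def fit_baseline(history: list[float], k: float = 3.0) -> tuple[float, float]:
    """Learn a 'normal' band from healthy-condition history (mean +/- k sigma)."""
    mean = statistics.fmean(history)
    std = statistics.stdev(history)
    return mean - k * std, mean + k * std

def is_abnormal(value: float, band: tuple[float, float]) -> bool:
    low, high = band
    return not (low <= value <= high)

healthy = [3.1, 3.3, 3.0, 3.2, 3.4, 3.1, 3.2]   # illustrative vibration values
band = fit_baseline(healthy)
print(is_abnormal(5.6, band))   # True -> raise a diagnosis or adjust the machine
```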
Today, customers are often given a written report on the behaviour of a machine. Drawing on accumulated experience, engineering specialists such as SKF deliver many such reports to clients every year. So, what if the results of these reports could be produced automatically and used to improve analytics capabilities?
There are precedents for this. Machine vision systems, for instance, ‘know’ whether a defect is serious because they have been ‘shown’ many examples. The principle is used in everything from product checking to quality inspection. In the not-too-distant past, such defects could only have been recognised by a human operator.
Now, a similar principle is at work for more complex machine problems. Automated systems will soon be able to interpret a mass of both structured and unstructured data and automatically diagnose the problem. A system might compare a current picture with a historical one, for example, or extract data directly from a written report. With every text, image, audio clip or video, the automated system will learn and improve. At the same time, experts can focus on problems not yet known to the system, and trigger supervised learning.
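As a sketch of how such learning from reports might work – using scikit-learn and invented example reports, not SKF’s actual pipeline – a supervised text classifier could be trained on past reports labelled with the expert’s diagnosis:

```python
# Generic supervised-learning sketch: past written reports, labelled with the
# diagnosis an expert reached, train a model that suggests a diagnosis for
# new reports. Assumes scikit-learn is installed; all examples are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = [
    "High-frequency vibration at drive end, spectrum shows outer-race defect",
    "Temperature rising under load, lubricant dark and contaminated",
    "Impacting visible in time waveform, outer-race frequencies dominant",
    "Oil analysis shows oxidation, bearing running hot",
]
labels = ["bearing defect", "lubrication issue", "bearing defect", "lubrication issue"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(reports, labels)

print(model.predict(["Bearing temperature climbing, grease degraded"])[0])
# e.g. 'lubrication issue' -- each newly labelled report can be added to
# retrain the model, which is how the system 'learns and improves'
```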
Of course, there are hurdles to overcome before we get to this point. While the hardware and software are largely in place, we still need all the systems – produced by different vendors – to communicate with one another seamlessly. Data access, exchange and interoperability have long been a concern, but there are signs that things are becoming more open. End users in particular, served by multiple suppliers, are pushing for systems that work in harmony with one another.
Moving from Big Data to Smart Data means moving from knowing what is going on to knowing what will happen, why it will happen and what needs to be done. If we can gain this insight in real time, we create benefit and value for industry.