Automated Inspection and the Power of Data: A Panel Discussion from WRI 2014
by Jeff Tuzik
It wasn’t so long ago that measurement technology referred to calipers, gauges, or other tools. And it wasn’t so long ago that the data railroads collected on their properties was measured in pages instead of gigabytes. Now, the data flows constantly. Automated measurement technologies, both vehicle- and wayside-based, have changed the way railroads know themselves. The data from these technologies informs operations, planning and maintenance in increasingly sophisticated ways.
“One of the reasons we use automated measurement technology is to comply with mandated regulatory inspections,” said Gary Wolf, President of Wolf Railway Consulting and moderator of a panel discussion on automated inspection systems at the Wheel/Rail Interaction 2014 Conference.
Another reason is to ensure compliance with other requirements relating to track geometry, gauge restraint, and rail wear and profile measurement. Automated laser-based measurement systems may not be mandated for these measurements, but they are efficient and cost-effective. Most importantly, they remove the subjectivity and improve the reliability of inspections.
Automated technology is often used to identify potentially catastrophic failures – the classic hotbox, high/wide load and dragging equipment detectors, for example, Wolf said. “We also use these technologies to lower our asset lifecycle costs by moving from reactive to preventive maintenance strategies.”
Some onboard measurement systems, such as those on track geometry cars, have been in use for nearly 40 years. They are being joined by new and developing technologies such as systems that measure track deflection, vision systems that assess tie and rail surface conditions, and eddy current systems that measure crack depth in rails. Other developments, such as rail neutral temperature monitoring, are just around the corner.
Among the oldest automated measurement technologies are Wheel Impact Load Detector (WILD) systems, which record dynamic wheel loads as wheels pass over the detector. CN, for example, operates a networked system of 39 WILDs, measuring about 180 million wheels per year, said Bill Blevins, Chief Mechanical and Electrical Engineer at Canadian National.
WILD data allows railroads and car owners to see trends as they develop. They can see a wheel progressively get worse before it hits the 90-kip threshold adopted by the industry. “This allows the [railroad] industry to be more proactive, and to save money and manpower by scheduling repairs more effectively than in the past,” said Ryan McWilliams, Chief Technical Officer of International Engineering.
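The kind of trending McWilliams describes can be sketched in a few lines of code: fit a simple trend to a wheel's recent peak impact readings and estimate when it will cross the 90-kip threshold. The data layout and field names below are hypothetical, and production WILD trending systems are considerably more sophisticated; this is only a minimal illustration of the idea.

```python
# Hypothetical sketch: project when a wheel's peak impact load will
# reach the 90-kip industry threshold, using a simple linear fit over
# its recent WILD readings. Field names and data layout are illustrative.
from datetime import date

THRESHOLD_KIPS = 90.0

def days_to_threshold(readings):
    """readings: list of (date, peak_kips) tuples for one wheel, oldest
    first. Returns estimated days until the trend line crosses 90 kips,
    or None if the trend is flat or improving."""
    if len(readings) < 2:
        return None
    t0 = readings[0][0]
    xs = [(d - t0).days for d, _ in readings]
    ys = [kips for _, kips in readings]
    n = len(xs)
    # Ordinary least-squares slope and intercept
    x_bar, y_bar = sum(xs) / n, sum(ys) / n
    sxx = sum((x - x_bar) ** 2 for x in xs)
    if sxx == 0:
        return None
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / sxx
    intercept = y_bar - slope * x_bar
    if slope <= 0:
        return None  # load is not trending upward
    crossing_day = (THRESHOLD_KIPS - intercept) / slope
    return max(0.0, crossing_day - xs[-1])

readings = [(date(2014, 1, 5), 62.0), (date(2014, 2, 9), 71.5),
            (date(2014, 3, 14), 79.0)]
print(f"Estimated days to 90 kips: {days_to_threshold(readings):.0f}")
```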
Gary Wolf emphasized that point: “There’s a lot that we can do with automated systems that you just can’t do with manual inspections. With that amount of data, we can move from reaction to proaction; we can do accurate trending and systemic analysis.”
At Norfolk Southern, data from a wide range of sources, including track geometry and rail profile, joint bars, ties and fasteners, rail seats and tie plates, and ground-penetrating radar, feeds into trending and analysis programs, said Sean Woody, Manager of Track Inspection and Development for Norfolk Southern.
NS is also among the Class 1s using modeling tools to simulate vehicle/track interaction – to predict how various vehicles will perform, based on the actual track geometry. “We also try to predict performance by comparing actual rail profiles against a database of wheelsets,” Woody said.
NS is also beginning to tap its data for forecasting performance metrics, such as the expected life of a given rail in a given location, or the performance of one manufacturer’s rail compared to another, he said.
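As a purely illustrative sketch of that kind of manufacturer-to-manufacturer comparison (the record structure and numbers below are hypothetical, not NS data), wear-rate observations can be grouped by rail producer and averaged:

```python
# Hypothetical sketch: compare average wear rate (mm of head loss per
# 100 MGT) across rail manufacturers from per-segment inspection records.
# The record structure and values are illustrative only.
from collections import defaultdict

records = [
    {"segment": "A-101", "manufacturer": "Mill X", "wear_per_100mgt": 0.42},
    {"segment": "A-102", "manufacturer": "Mill Y", "wear_per_100mgt": 0.55},
    {"segment": "B-210", "manufacturer": "Mill X", "wear_per_100mgt": 0.47},
    {"segment": "B-305", "manufacturer": "Mill Y", "wear_per_100mgt": 0.58},
]

by_maker = defaultdict(list)
for rec in records:
    by_maker[rec["manufacturer"]].append(rec["wear_per_100mgt"])

for maker, rates in sorted(by_maker.items()):
    print(f"{maker}: mean wear {sum(rates) / len(rates):.2f} mm per 100 MGT "
          f"({len(rates)} segments)")
```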
These kinds of analyses are extremely valuable. But they're labor-intensive, and simply can't keep pace with the ever-increasing flow of data.
The challenge facing railroads today is that the proliferation of automated measurement technologies, which now include vision and imaging systems, has led to an explosion of data. The issue, in many cases, has shifted from how to collect data to how to manage it.
“Whether we’re looking at tie-downs and load securement, open hopper doors, bearing end cap bolts, missing components, or defects in wheels or track components, we’re generating big data,” McWilliams said.
It’s not uncommon for a modern track geometry vehicle to generate 100 GB of data per day of testing. A railroad with 10 vehicles and just 4 cameras operating at 30 frames per second generates 350 million images per day, a figure roughly equal to the number of photos uploaded to Facebook each day, McWilliams said. “This, alone, is a huge data set; now add all the other technologies and measurements.”
With datasets this large, some of the first challenges that arise have to do with simply moving and storing the data; transferring 100 GB of data from a vehicle in the field to a server thousands of miles away is a legitimate logistical concern.
Storing massive data troves also poses challenges beyond the purely logistical. There are potential liability issues, which can make railroads reluctant to collect data they can’t immediately act on, or aren’t sure how to, McWilliams said.
These thorny issues aside, there is the question of what to do with the data once it has been collected. With data coming from so many disparate sources at such a high rate, the question everybody is asking is how that raw data gets stitched together into information that can be acted on. “We can’t have people going through page after page of charts and pictures,” Gary Wolf said. “We need to develop algorithms that automatically review the data to get the most out of it.”
One approach to managing data overload, according to Sean Woody, is a more targeted approach to data collection in the first place. “Just because you can measure it, doesn’t mean you should,” he said.
But for now, it looks like big data will only get bigger. Data collection is only the first step in the process; data management and the extraction of actionable information is the next. And it’s a big step. “Data and decision management are the biggest things we’re going to face over the next couple of decades,” said Matthew Dick, Director of Business Development at ENSCO.
Better data management can lead to game-changing efficiency gains, but the current lack of adequate data management tools doesn’t allow for the full exploitation of that data. For example, it’s currently not uncommon to stop testing once a track geometry car has found “too many” defects. It’s also not uncommon to adjust measurement thresholds based on what is manageable, rather than on what is optimal. “We’re not encouraged to measure more, we’re encouraged to measure less,” Dick said.
ENSCO explored one aspect of data management using VTI (Vehicle/Track Interaction) data as its starting point. The ENSCO program looked at data from past derailment sites and discovered a particular pattern of low-level exceptions in the period leading up to each derailment.
The program team wrote an algorithm that searched a large database for similar patterns. During the test deployment, the algorithm “predicted” a derailment at a specific site, weeks before it actually occurred. The algorithm identified potential derailment sites that otherwise might have slipped through the cracks. “There was no added hardware, just software using data that was already there,” Dick said.
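ENSCO did not detail the algorithm itself, but the general idea can be sketched as follows: flag locations whose recent history of low-level exceptions resembles the signature seen ahead of past derailments. The exception types, counts, and structures below are hypothetical and only illustrate the approach; they are not ENSCO’s actual method.

```python
# Hypothetical sketch of pattern-matching over low-level VTI exceptions:
# flag locations whose recent exception history resembles a signature
# observed ahead of past derailments. Exception types, window, and
# counts are illustrative; this is not ENSCO's actual algorithm.
from collections import Counter

# Signature: minimum counts of each exception type seen in the weeks
# before past derailments at comparable sites.
DERAILMENT_SIGNATURE = {"lateral_accel_low": 3, "twist_low": 2, "gage_widening_low": 2}

def matches_signature(exceptions):
    """exceptions: list of exception-type strings recorded at one
    location over the trailing analysis window."""
    counts = Counter(exceptions)
    return all(counts[etype] >= n for etype, n in DERAILMENT_SIGNATURE.items())

history_by_location = {
    "MP 123.4": ["lateral_accel_low"] * 4 + ["twist_low"] * 2 + ["gage_widening_low"] * 3,
    "MP 200.1": ["twist_low"],
}

flagged = [loc for loc, exc in history_by_location.items() if matches_signature(exc)]
print("Locations flagged for follow-up inspection:", flagged)
```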
The ENSCO program is an example of the kind of impact data management tools and techniques can have in the era of big data. It also illustrates the kind of potential liability concern Ryan McWilliams mentioned, and underscores the importance of proper data management.
CN has also had success in extracting new information from existing hardware and data. Since WILDs inherently measure impact loads, or “weight,” CN was able to use the data to measure car overload and imbalance. “We can find overloaded trucks and overloaded cars, side-to-side or end-to-end imbalance, and stray loads marked as empty,” Bill Blevins said.
In one case, CN noticed anomalous WILD data associated with a side-to-side imbalance derailment in a particular curve. It was the third such side-to-side derailment at the same location, involving the same car type from the same customer. The customer, it turned out, was consistently loading containers unevenly, Blevins said.
After noticing the pattern in the WILD data, and the derailment potential it represented, CN implemented an automated detection system based on new overload/imbalance parameters. Existing data yielded new information.
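CN’s exact parameters weren’t discussed, but a minimal sketch of how per-wheel WILD load estimates could be turned into overload and imbalance flags might look like the following. The thresholds, field names, and load values are illustrative assumptions, not CN’s actual rules.

```python
# Hypothetical sketch: derive overload and imbalance flags from per-wheel
# WILD load estimates for one car. Thresholds and data layout are
# illustrative, not CN's actual parameters.

# Quasi-static load per wheel in kips, keyed by (axle, side).
wheel_loads = {
    (1, "L"): 34.0, (1, "R"): 26.5,
    (2, "L"): 33.5, (2, "R"): 27.0,
    (3, "L"): 29.0, (3, "R"): 28.5,
    (4, "L"): 29.5, (4, "R"): 28.0,
}

MAX_GROSS_KIPS = 286.0          # common North American gross rail load limit
SIDE_IMBALANCE_LIMIT = 0.10     # flag if one side carries >10% more of the gross

gross = sum(wheel_loads.values())
left = sum(v for (axle, side), v in wheel_loads.items() if side == "L")
right = gross - left
end_a = sum(v for (axle, side), v in wheel_loads.items() if axle <= 2)
end_b = gross - end_a

side_imbalance = abs(left - right) / gross
end_imbalance = abs(end_a - end_b) / gross

print(f"Gross load: {gross:.1f} kips (overloaded: {gross > MAX_GROSS_KIPS})")
print(f"Side-to-side imbalance: {side_imbalance:.1%} "
      f"(flag: {side_imbalance > SIDE_IMBALANCE_LIMIT})")
print(f"End-to-end imbalance: {end_imbalance:.1%}")
```

In practice such parameters would account for speed, car type, and detector calibration, but even this simple form shows how existing load data can yield new information.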
One of the great challenges in working with automated measurement technologies and the big data they create is pooling data from disparate sources and translating it into discrete actions. That requires the ability to sift through huge databases containing multiple types of data. As Bill Blevins’ example illustrates, there are correlations and patterns in datasets that may span multiple operational “silos.” The answer, and an important part of the way forward in data management and analysis, according to Ryan McWilliams, is to address vehicle and track needs as a single system. “We need to be able to take action in one department even if it benefits another department,” he said.
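As a purely illustrative example of that kind of cross-silo correlation, mechanical-department wheel-impact flags could be joined against engineering-department track geometry exceptions by location, so that data collected by one group can trigger action by another. The identifiers, structures, and values below are hypothetical.

```python
# Hypothetical sketch: correlate records from two operational "silos"
# (mechanical wheel-impact flags and engineering track geometry
# exceptions) by milepost, so one department's data can trigger action
# in another. Identifiers, structures, and values are illustrative.

wheel_impact_flags = [
    {"milepost": 123.4, "car": "ABCX 4512", "peak_kips": 92.0},
    {"milepost": 200.1, "car": "DEFX 1107", "peak_kips": 95.5},
]

geometry_exceptions = [
    {"milepost": 123.6, "type": "surface", "severity": "low"},
    {"milepost": 310.0, "type": "alignment", "severity": "low"},
]

MATCH_RADIUS_MILES = 0.5

def correlate(impacts, exceptions, radius):
    """Pair each wheel-impact flag with any nearby geometry exception."""
    pairs = []
    for imp in impacts:
        for exc in exceptions:
            if abs(imp["milepost"] - exc["milepost"]) <= radius:
                pairs.append((imp, exc))
    return pairs

for imp, exc in correlate(wheel_impact_flags, geometry_exceptions, MATCH_RADIUS_MILES):
    print(f"MP {exc['milepost']}: {exc['type']} exception near "
          f"{imp['peak_kips']}-kip impact from {imp['car']} -- review jointly")
```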
Automated inspection technologies have revolutionized the way railroads measure and understand their assets. There’s no question that the measurement toolkit will continue to grow, and with it, big data will get bigger. “We’ve made some significant advancements over the past 15 years,” Gary Wolf said, “but there is a long way to go.” According to Wolf, two things have to happen in order to increase deployment of emerging inspection technologies and data management techniques: investment – ideally in the form of grants and other research funding; and regulatory relief from manual inspection requirements.
“As an industry we have to support each other and adopt these technologies,” Wolf said. “They’ll make our lives better in the long run.”
Jeff Tuzik is Managing Editor of Interface Journal.