Trains.com

First thing we do is automate all the Trains. Another view on automation.

  • Member since
    January 2019
  • 4 posts
Posted by JohnMann on Tuesday, February 8, 2022 8:34 AM

You have no idea how true that is! Automation does not understand and has no judgment.

  • Member since
    January 2014
  • 8,221 posts
Posted by Euclid on Tuesday, February 8, 2022 8:59 AM

BaltACD

Believe corporate doubletalk at your own risk.

 

Don’t get me wrong.  I am not an advocate of autonomous cars, trucks, or trains.  I am only referring to the advocacy trend for them by others.  Obviously the advocates are overpromising.  Promising the moon is part of marketing.  Pie in the sky not only promises the result, but it also cultivates a market of believers who will push the limit of investment in the future.  So in the future, we will colonize Mars and conduct mining on asteroids.
 
It is autonomous private automobiles that many believe will come first, followed by autonomous trucks.  Both of those are claimed to be only a year or two away.  Between cars, trucks, and trains, I think trains are the most technically feasible, but the idea may be organizationally impossible.  Or the dream may be resoundingly rejected, as ECP brakes were, once the promise of autonomous trucks slides out a few years.
 
But the current marketing hype behind autonomous trucks seems to have a powerful secondary effect on the railroads by threatening them with losing a lot of business to trucking. 
 
I expect that any of these autonomous modes will need to have a large part of their infrastructure static and built into the thoroughfare infrastructure, especially in the case of highways and motor vehicles.  Railroads already have a lot of that sort of control in their guideway infrastructure.  Not needing to be steered offers trains a huge advantage over road vehicles.  For highway autonomous driving, the system will need sensing and decision making to address everything happening within the sphere of operation, including contingencies not originating from the roadway.
 
The greatest impetus in the marketing of autonomous vehicles is the premise that robotic reliability will far exceed that of human operators, so the safety of autonomous operation will be far greater than with human operation.  Highway travel is a carnage of disasters caused by human error.  Even if autonomous driving cannot make the roads perfectly safe, it is easy to promise to make them 99% safe.  That promise is all that is needed to materialize the dream.
  • Member since
    September 2003
  • 21,669 posts
Posted by Overmod on Tuesday, February 8, 2022 10:03 AM

JohnMann
You have no idea how true that is!  Automation does not understand and has no judgment.

That was not even true in the late '40s when GM started its early practical research into ITS.

Cheap 'automation' built to a price... with nothing but typical FSM algorithms and system complexity... yes, I agree with you.  But if you look at autonomous-vehicle research since the understanding that the same principle that made the iPhone screen practical can be applied to massive sensor fusion, you will rapidly understand that haptic comprehension and effective safe judgment of actions have become easy to model and increasingly practical to implement.  They just won't be simple-minded and deterministic.
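To give one concrete (and deliberately toy) illustration of the sensor-fusion principle -- nobody's production stack, just the arithmetic -- two independent range estimates, say radar and camera, can be combined by weighting each with the inverse of its variance.  All numbers below are hypothetical.

# Toy sketch of sensor fusion: combine two independent, noisy range
# estimates (e.g., radar and camera) by inverse-variance weighting.
def fuse(estimate_a, var_a, estimate_b, var_b):
    """Inverse-variance weighted fusion of two independent estimates."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * estimate_a + w_b * estimate_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)  # fused estimate is less uncertain than either input
    return fused, fused_var

# Radar says the obstacle is 52 m away (variance 4 m^2); the camera says 49 m (variance 1 m^2).
distance, variance = fuse(52.0, 4.0, 49.0, 1.0)
print(f"fused range: {distance:.1f} m, variance: {variance:.2f} m^2")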

For example, there was a program at Phantom Works in St. Louis using some of the logic derived from battlefield management systems like the extended version of Ida (which was 'born' as a detailing system for the Navy).  It could track the progress of an air combat environment, determine when control inputs were dangerous (to the airframe, projected track, tactical situation, etc.), and take over the fly-by-wire system to produce alternative control outputs accomplishing 'pilot intent' by what might be highly nonstandard effectuation.  You can probably deduce some of the necessary subsystems to accomplish this effectively, with combat-pilot input, in a secure and gracefully-degradable context.  That is little different from what has become cost-effective to implement in 'production' autonomous vehicles.

What many people seem to forget is that autonomous vehicles will be smart and moral enough not to go into known bad weather conditions or accident situations, or to continue operating when being 'abused' by others in traffic, etc.  They will do the equivalent of putting on the flashers, going to the right, and, if necessary, navigating to safe parking or finding alternate routes.

I have been in situations where all the prediction in the world might not help 'in time' -- for example, I was once coming south on I-65 just past Fort Knox at night, and had the bottom fall out of a cloudburst on me in the middle lane at 65mph with no advance warning.  Instantly there was zero effective vision and substantial undrained water on the pavement; the best I could do was to put on flashers, maintain some reasonable trajectory where others wouldn't hit me, and try to get to the right.  A proper autonomous vehicle would have weather-radar tracking in known poor-weather conditions, individual-wheel antilock braking, and sensors capable of resolving other traffic -- if necessary, by radio -- to avoid as much collision risk as possible; it would also 'know' the vehicle response characteristics to make the response essentially as good as 'humanly possible'.  And note that nothing more than that could be expected of an automated system.  We won't be able to prevent all accidents -- but we can avoid the usual ones, at least mitigate the unavoidable ones, and above all prioritize life in the responses.  That is little different from what a human driver would do... if they had very fast reflexes, the ability to anticipate their situation and surroundings, the awareness to combine many information sources, the capacity to revise schedule and assess JIT forward tracking and communication, and so on.
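If you want the flavor of that prioritization in code, here is a deliberately crude sketch; the thresholds and responses are made up, not anyone's production controller:

# Purely illustrative: rank responses to a sudden loss of visibility,
# prioritizing life over schedule.  Thresholds are hypothetical.
def weather_response(visibility_m, standing_water, shoulder_clear):
    if visibility_m < 20 and shoulder_clear:
        return ["hazard flashers on", "decelerate gently", "move right", "park on shoulder"]
    if visibility_m < 20:
        return ["hazard flashers on", "hold lane", "decelerate gently", "broadcast position by radio"]
    if standing_water:
        return ["reduce speed below hydroplaning threshold", "disable cruise control"]
    return ["continue, monitor weather radar"]

print(weather_response(visibility_m=5, standing_water=True, shoulder_clear=False))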

Keep in mind that the system we designed for Conrail in 1987 (in the wake of the Chase wreck) included 'human-derived' power modulation and braking response to set up a given freight consist and bring it to a least-distance stop.  In those days we had to hard-code a set of alternatives; nowadays we can accomplish much of what's needed with a simple set of running tests at and just out of the originating terminal... as well as having the hard-coded genetically-optimized rules and algorithms for anticipated train behavior.
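Not the Conrail code, of course, just the flavor of the idea in a few lines (all figures hypothetical): estimate the consist's achievable deceleration from a couple of running tests out of the terminal, fall back to a conservative hard-coded figure when there are no tests, and plan the stop from that.

# Sketch only: estimate a consist's achievable braking deceleration from
# running tests near the originating terminal, then plan a least-distance stop.
FALLBACK_DECEL = 0.3  # m/s^2, deliberately conservative hard-coded value

def estimated_decel(test_results):
    """test_results: list of (initial_speed_m_s, stop_distance_m) from running tests."""
    if not test_results:
        return FALLBACK_DECEL
    # v^2 = 2*a*d  ->  a = v^2 / (2*d); use the worst (smallest) observed value
    return min(v * v / (2.0 * d) for v, d in test_results)

def stopping_distance(speed_m_s, decel):
    return speed_m_s ** 2 / (2.0 * decel)

tests = [(15.0, 260.0), (20.0, 450.0)]  # two running tests, made-up numbers
a = estimated_decel(tests)
print(f"planned stop from 25 m/s: {stopping_distance(25.0, a):.0f} m")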

  • Member since
    December 2001
  • From: Northern New York
  • 25,020 posts
Posted by tree68 on Tuesday, February 8, 2022 10:25 AM

Overmod
-- for example, I was once coming south on I-65 just past Fort Knox at night, and had the bottom fall out of a cloudburst on me in the middle lane at 65mph with no advance warning.  Instantly there was zero effective vision and substantial undrained water on the pavement; the best I could do was to put on flashers, maintain some reasonable trajectory where others wouldn't hit me, and try to get to the right

Had essentially the same thing happen to me on I-44 at St. Louis once. At night.  I was in the far left lane (of three).  I didn't dare speed up, slow down, or change lanes even, as I couldn't tell if anyone else was around me, also flying blind.  Even the lane markings were essentially invisible.  It was only the occasional glimpse of the "Jersey barriers" to my left that kept me on course.

As I mentioned before, every trip is different.  We regularly deal with leaves on the rails (just as bad as grease) and I once had one heck of a time with nothing more than the morning dew on the rails.  Just one more parameter to consider when trying to write a program that will handle potential situations. 
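To put rough numbers on why that matters (the adhesion figures below are only illustrative), the same idealized braking calculation gives very different stopping distances as the railhead gets slicker:

# Illustration of why rail contamination matters: stopping distance scales
# inversely with available adhesion.  Adhesion values are rough, illustrative figures.
G = 9.81  # m/s^2

def stop_distance(speed_m_s, adhesion):
    """Idealized adhesion-limited stopping distance: d = v^2 / (2 * mu * g)."""
    return speed_m_s ** 2 / (2.0 * adhesion * G)

speed = 20.0  # m/s, roughly 45 mph
for condition, mu in [("dry rail", 0.25), ("dew", 0.10), ("wet leaves", 0.05)]:
    print(f"{condition:>10}: {stop_distance(speed, mu):6.0f} m")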

Larry
Resident Microferroequinologist (at least at my house) 
Everyone goes home; Safety begins with you
My Opinion. Standard Disclaimers Apply. No Expiration Date
Come ride the rails with me!
There's one thing about humility - the moment you think you've got it, you've lost it...

  • Member since
    September 2003
  • 21,669 posts
Posted by Overmod on Tuesday, February 8, 2022 10:51 AM

tree68
Just one more parameter to consider when trying to write a program that will handle potential situations. 

With the well-established point (in part derived from the greater-and-greater-fool theory) that it becomes increasingly hard to calculate complex interactions and have the system 'design' responses for them.

I think I was successful in getting the R10 critical-systems group at ITU to understand the importance of tracking ad hoc situational awareness as well as just having the system monitor all the outputs from sensors and so forth.  We had the critical advantage at that point of understanding the TMI post-mortems (and some of the Chernobyl machinations) and understood that a critical-response setup might in fact not at all mirror a control-room environment, particularly with respect to responses with complex interactions and intended consequences.

I'd like to think that we have a better, or at least wiser, set of systems programmers than the folks on NAJPTC who hard-coded a zero train length into the software at one point... considerably worse than not flexibly accommodating stopping short of formal control points.  A problem is that the "Internet paradigm" for testing seems to be increasingly adopted, particularly by 'outsourced' bazaar-programming teams: the idea of going live, bugs and all, on the assumption that your users will identify the bugs for you and notify you of what needs to be patched.  It is my diplomatically-phrased opinion that this should never, ever be applied to critical systems, except as part of continuous optimization processes.
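The zero-train-length episode is exactly the sort of thing boundary validation is supposed to catch; a minimal sketch of the idea, with hypothetical limits, looks like this:

# Minimal sketch of input validation for a safety-critical parameter.
# A physically impossible value (zero or negative train length) should be
# rejected outright, never silently accepted or hard-coded.  Limits are hypothetical.
MIN_TRAIN_LENGTH_M = 10.0    # shorter than any real consist
MAX_TRAIN_LENGTH_M = 4000.0  # longer than any train we expect to run

def validated_train_length(length_m: float) -> float:
    if not (MIN_TRAIN_LENGTH_M <= length_m <= MAX_TRAIN_LENGTH_M):
        raise ValueError(f"implausible train length: {length_m} m")
    return length_m

# A rear-of-train clearance check should only ever see a validated length.
def rear_clears_control_point(head_end_position_m, control_point_m, length_m):
    return head_end_position_m - validated_train_length(length_m) > control_point_m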

I had this discussion "again" with someone in Britain recently regarding lightweight rail vehicles and SPADs (not the airplanes but the near-unthinkable formal safety violations).  It was and is my opinion that the brakes on these things need careful antilock operation, but also careful pre-application that allows for KNOWN and UNKNOWN slip conditions in every modulated stop, applied and then monitored 'early enough' that time between stops can be minimized while absolutely ensuring no platform 'overrun'.  Note that this implies some very sophisticated "autonomous" control in a system explicitly under 'driver' control at all times; one which should inspire no complacency that 'the system will handle it' without the periodic 'stick time' used in some aircraft operations to maintain flying proficiency.  Personally, I think it's fun to design these things.  Programmers who may be on a for-hire or consulting gig... perhaps not so much.
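A rough sketch of the overrun check implied there, with all figures hypothetical: plan every platform stop against the worst credible slip plus a margin, and monitor early enough to know whether braking must begin now.

# Illustrative only: plan a platform stop against worst-case adhesion and
# confirm there is margin before the end of the platform.  Numbers are hypothetical.
G = 9.81

def stop_distance(speed_m_s, adhesion):
    return speed_m_s ** 2 / (2.0 * adhesion * G)

def braking_point(distance_to_platform_end_m, speed_m_s,
                  worst_case_adhesion=0.05, margin_m=25.0):
    """Distance remaining before brake application must begin, assuming the
    worst credible slip; zero or negative means slow down now."""
    needed = stop_distance(speed_m_s, worst_case_adhesion) + margin_m
    return distance_to_platform_end_m - needed

remaining = braking_point(distance_to_platform_end_m=600.0, speed_m_s=22.0)
print("overrun risk, brake now" if remaining <= 0 else f"{remaining:.0f} m before braking must begin")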
