BaltACD: Believe corporate doubletalk at your own risk.

JohnMann: You have no idea how true that is! Automation does not understand and has no judgment.
Cheap 'automation' built to a price... with nothing but typical finite-state-machine algorithms and system complexity... yes, I agree with you. But if you look at autonomous-vehicle research since the realization that the same principle that made the iPhone screen practical can be applied to massive sensor fusion, you will rapidly understand that haptic comprehension and effective, safe judgment of actions have become easy to model and increasingly practical to implement. They just won't be simple-minded and deterministic.
For example, there was a program at Phantom Works in St. Louis using some of the logic derived from battlefield-management systems like the extended version of Ida (which was 'born' as a detailing system for the Navy). It could track the progress of an air-combat environment, determine when control inputs were dangerous (to the airframe, projected track, tactical situation, etc.), and take over the fly-by-wire system to produce alternative control outputs accomplishing 'pilot intent' by what might be highly nonstandard effectuation. You can probably deduce some of the subsystems necessary to accomplish this effectively, with combat-pilot input, in a secure and gracefully-degradable context. That is little different from what has become cost-effective to implement in 'production' autonomous vehicles.
What many people seem to forget is that autonomous vehicles will be smart and moral enough not to go into known bad weather conditions or accident situations, or to continue operating when being 'abused' by others in traffic, etc. They will do the equivalent of putting on the flashers, moving to the right, and, if necessary, navigating to safe parking or finding alternate routes.
I have been in situations where all the prediction in the world might not help 'in time'. For example, I was once coming south on I-65 just past Fort Knox at night and had the bottom fall out of a cloudburst on me in the middle lane at 65 mph with no advance warning. Instantly there was zero effective vision and substantial undrained water on the pavement; the best I could do was put on the flashers, maintain some reasonable trajectory where others wouldn't hit me, and try to get to the right.

A proper autonomous vehicle would have weather-radar tracking in known poor-weather conditions, individual-wheel antilock braking, and sensors capable of resolving other traffic -- if necessary, by radio -- to avoid as much collision risk as possible; it would also 'know' the vehicle's response characteristics well enough to make its response essentially as good as 'humanly possible'. And note that nothing more than that could be expected of an automated system. We won't be able to prevent all accidents -- but we can avoid the usual ones, at least mitigate the unavoidable ones, and above all prioritize life in the responses. That is little different from what a human driver would do... if they had very fast reflexes, could anticipate their situation and surroundings, were aware of and able to combine many information sources, could revise schedule and assess just-in-time forward tracking and communication, and so on.
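To make the idea concrete, here is a minimal sketch of the kind of priority-ordered, degraded-visibility response described above. This is not any real vehicle's control logic; every name, sensor field, and threshold is an assumption invented for illustration, and the "prioritize life" ordering is just a fixed action list.

```python
# Hypothetical sketch of a degraded-visibility fallback policy.
# All field names and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class SensorState:
    camera_visibility: float   # 0.0 (blind) .. 1.0 (clear)
    radar_ok: bool             # radar still resolving other traffic
    v2v_contacts: int          # nearby vehicles heard over radio (V2V)
    water_depth_mm: float      # estimated standing water on the pavement

def degraded_mode_actions(s: SensorState) -> list[str]:
    """Return an ordered list of actions, safety first, schedule last."""
    actions = ["hazard_flashers_on"]
    if s.water_depth_mm > 5.0:
        # Slow gently; abrupt braking on standing water risks hydroplaning.
        actions.append("reduce_speed_gradually")
    if s.camera_visibility < 0.2 and s.radar_ok:
        # Cameras are blind but radar is usable: hold lane on radar and map.
        actions.append("hold_trajectory_on_radar")
    if s.v2v_contacts == 0 and s.radar_ok:
        # No one detected nearby: work toward the shoulder and safe parking.
        actions.append("move_right_when_clear")
        actions.append("navigate_to_safe_parking")
    return actions
```

The point of the ordering is the same one made above: flashers and a stable trajectory come before any maneuver, and maneuvers happen only when the remaining sensors can confirm they are safe.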
Keep in mind that the system we designed for Conrail in 1987 (in the wake of the Chase wreck) included 'human-derived' power modulation and braking response to set up a given freight consist and bring it to a least-distance stop. In those days we had to hard-code a set of alternatives; nowadays we can accomplish much of what's needed with a simple set of running tests at and just out of the originating terminal... as well as having the hard-coded genetically-optimized rules and algorithms for anticipated train behavior.
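The "running tests out of the originating terminal" idea can be sketched very simply: a few instrumented stops give you an empirical deceleration estimate for that particular consist, which then predicts stopping distance at other speeds. This is a toy constant-deceleration model, not the Conrail system; all numbers are invented for illustration.

```python
# Toy calibration sketch: estimate a consist's braking behavior from a
# few test stops, then predict stopping distance at other speeds.
# Assumes constant deceleration (d = v^2 / 2a), purely for illustration.

def fit_deceleration(test_stops):
    """test_stops: list of (speed_m_s, stop_distance_m) pairs.
    Each stop implies a = v^2 / (2d); average the estimates."""
    estimates = [v * v / (2.0 * d) for v, d in test_stops]
    return sum(estimates) / len(estimates)

def stopping_distance(speed_m_s, decel_m_s2):
    """Predicted stopping distance at a given speed for the fitted rate."""
    return speed_m_s ** 2 / (2.0 * decel_m_s2)

# Two hypothetical test stops, one near 60 mph (27 m/s), one slower:
a = fit_deceleration([(27.0, 729.0), (13.0, 169.0)])   # both imply 0.5 m/s^2
print(stopping_distance(20.0, a))                      # -> 400.0 m
```

A real system would of course fit grade, brake-pipe propagation, and load distribution rather than a single constant, which is exactly where the hard-coded, genetically-optimized rules mentioned above come in.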
Overmod: ...for example, I was once coming south on I-65 just past Fort Knox at night, and had the bottom fall out of a cloudburst on me in the middle lane at 65mph with no advance warning. Instantly there was zero effective vision and substantial undrained water on the pavement; the best I could do was to put on flashers, maintain some reasonable trajectory where others wouldn't hit me, and try to get to the right.
Had essentially the same thing happen to me on I-44 at St. Louis once. At night. I was in the far left lane (of three). I didn't dare speed up, slow down, or even change lanes, as I couldn't tell whether anyone else was around me, also flying blind. Even the lane markings were essentially invisible. It was only the occasional glimpse of the "Jersey barriers" to my left that kept me on course.
As I mentioned before, every trip is different. We regularly deal with leaves on the rails (just as bad as grease) and I once had one heck of a time with nothing more than the morning dew on the rails. Just one more parameter to consider when trying to write a program that will handle potential situations.
tree68: Just one more parameter to consider when trying to write a program that will handle potential situations.
I think I was successful in getting the R10 critical-systems group at ITU to understand the importance of tracking ad hoc situational awareness as well as just having the system monitor all the outputs from sensors and so forth. We had the critical advantage at that point of understanding the TMI post-mortems (and some of the Chernobyl machinations) and understood that a critical-response setup might in fact not at all mirror a control-room environment, particularly with respect to responses with complex interactions and intended consequences.
I'd like to think that we have a better, or at least wiser, set of systems programmers than the folks on NAJPTC who hard-coded a zero train length into the software at one point... considerably worse than not flexibly accommodating stopping short of formal control points. A problem is that the "Internet paradigm" for testing seems to be increasingly adopted, particularly by 'outsourced' bazaar-programming teams: the idea of going live, bugs and all, on the theory that your users will identify the bugs for you and notify you of what needs to be patched. It is my diplomatically-phrased opinion that this should never, ever be applied to critical systems, except as part of continuous optimization processes.
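The zero-train-length class of defect is exactly the kind a safety-critical system should catch at the boundary rather than discover in service. A hypothetical illustration (not the NAJPTC code, and the plausibility bounds are invented): validate the physical parameter loudly before any control logic ever sees it.

```python
# Hypothetical boundary-validation sketch for a safety-critical input.
# The bounds are illustrative, not any real system's limits.

def set_train_length(meters: float) -> float:
    """Reject implausible train lengths before they reach control logic.

    A zero or negative length silently accepted here is precisely the
    kind of defect that live 'users will find it' testing never should
    be trusted to catch in a critical system.
    """
    if not (0.0 < meters <= 4000.0):
        raise ValueError(f"implausible train length: {meters} m")
    return meters
```

The design choice is to fail loudly at the edge of the system, where the bad value still has context, rather than let a default propagate into braking or control-point calculations.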
I had this discussion "again" with someone in Britain recently regarding lightweight rail vehicles and SPADs (not the airplanes, but the near-unthinkable formal safety violations: signals passed at danger). It was and is my opinion that the brakes on these things need careful antilock operation, but also careful pre-application allowing for KNOWN and UNKNOWN slip conditions in every modulated stop, applied and then monitored 'early enough' that time between stops can be minimized while absolutely ensuring no platform 'overrun'. Note that this implies some very sophisticated "autonomous" control in a system explicitly under 'driver' control at all times -- one that should inspire no complacency that 'the system will handle it' without periodic 'stick time', as in some aircraft operations, to maintain proficiency. Personally I think it's fun to design these things. Programmers who may be on a for-hire or consulting gig... perhaps not so much.
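The "apply early against the worst case" idea above can be sketched in a few lines. This is a toy, not a real rail braking system: it sizes the brake demand against the worst assumed adhesion so that, even if the rail turns out to be bad, the stop still completes before the platform end. All names and numbers are assumptions.

```python
# Toy sketch of conservative modulated braking: size the demand against
# WORST-case adhesion so a platform overrun is never risked.
# Constant-deceleration physics; all values illustrative only.

def required_decel(speed_m_s: float, distance_m: float) -> float:
    """Deceleration needed to stop within distance_m from speed_m_s."""
    return speed_m_s ** 2 / (2.0 * distance_m)

def brake_command(speed_m_s: float, distance_m: float,
                  worst_case_decel: float) -> float:
    """Return brake effort in [0, 1], scaled to worst-case adhesion.

    If measured deceleration later confirms better grip, a real system
    could relax the demand -- but never below what the worst case needs.
    """
    need = required_decel(speed_m_s, distance_m)
    if need >= worst_case_decel:
        # Even full braking on bad rail may overrun: demand maximum now.
        return 1.0
    return need / worst_case_decel
```

Because the command is computed against the pessimistic adhesion figure, 'early enough' application falls out naturally: the demand rises well before the point at which only perfect rail could save the stop.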