DS4-4-1000
I don't see anybody putting any flood height transmitters on the roads around here anytime soon.
Did I say anything as stupid as that?
The National Weather Service has some fairly good ideas about predicting flood heights in regions with accurate GIS data. Yes, there might be some simple transponders in key regions, but we're not talking a dense network of devices that would only actuate something like 0.0002% of the time.
On the other hand, very accurate historical data for flood extent is preserved in a number of places, and it does not take rocket science (or lots of expensive computer time) to figure out the correlation between weather and subsequent flood extent. A child could figure out how to do this to an extent that performs the desired action: keeping autonomous vehicles away from expected risk. Perhaps the system 'cries wolf' sometimes -- the weather people around Memphis are notorious for predicting sky-is-falling storm risk which never quite gets to more than the occasional downpour -- but if the purpose of the network is to turn autonomous vehicles around so they don't drown, it does what is required.
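The kind of weather-to-flood correlation described above really is simple enough to sketch in a few lines. The following is a minimal illustration only; the rainfall figures, threshold, and function names are all made-up assumptions, not any agency's actual model:

```python
# Hypothetical sketch: compare current rainfall against the rainfall totals
# that preceded historical flood events on a given road segment, and avoid
# the segment if enough past floods occurred at or below today's total.

def flood_risk(rainfall_24h_mm, historical_flood_rainfall_mm):
    """Fraction of past flood events whose triggering 24-hour rainfall
    was at or below the current 24-hour total."""
    if not historical_flood_rainfall_mm:
        return 0.0
    exceeded = sum(1 for r in historical_flood_rainfall_mm
                   if rainfall_24h_mm >= r)
    return exceeded / len(historical_flood_rainfall_mm)

def should_avoid(rainfall_24h_mm, history, threshold=0.25):
    # Deliberately conservative: 'crying wolf' beats drowning.
    return flood_risk(rainfall_24h_mm, history) >= threshold

history = [60, 75, 90, 120]       # mm of rain before past floods here (illustrative)
print(should_avoid(30, history))  # light rain: False, proceed
print(should_avoid(80, history))  # heavy rain: True, reroute
```

A deliberately low threshold gives exactly the 'cries wolf sometimes' behavior described: occasional false alarms in exchange for keeping vehicles away from expected risk.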
Overmod
It is also likely that the system will be primed to receive and parse weather alerts, which may in the future include realtime flash-flooding water heights or GIS referencing and data fusion from 'smart structures'. That might help prevent a proper autonomous vehicle from coming to a potentially-flooded place (or a dangerously-compromised bridge) in the first place.
I don't see anybody putting any flood height transmitters on the roads around here anytime soon. It takes years to get any damaged bridges or washed-out culverts replaced, and you expect the government bodies to just divert already scarce funds to install something that will require even more maintenance?
Semper Vaporo
A component of the self-steering is to "see" the road ahead and the edges in some manner for some distance (with allowance for loss of edges, etc. for specific distances at specific places, such as at intersections, etc.) so a road covered with water should be detected and avoided somehow.
A number of the more 'modern' systems are actually arranged so their machine vision detects water, or the distortions of physical features over time typical of rippling or motion of water, and references this against some of the 'database' features associated with the GPS location. So, in a very real sense, the logic will recognize the presence of flooding at a particular point, make some conclusion about likely flooding effects 'further along the intended route', and almost certainly (litigation being at the state it is!) perform a reversal and reroute; that's the moral equivalent of 'turn around, don't drown' in context.
It is also likely that the system will be primed to receive and parse weather alerts, which may in the future include realtime flash-flooding water heights or GIS referencing and data fusion from 'smart structures'. That might help prevent a proper autonomous vehicle from coming to a potentially-flooded place (or a dangerously-compromised bridge) in the first place.
There is considerably more to autonomous vehicle operation than 'self-steering vehicles' a la Martin Marietta, just as there is much more to a modern smartphone touch interface than high sensor resolution and many levels of touch sensitivity...
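The detect-and-reroute logic described above could be sketched roughly as follows. Everything here is an illustrative assumption (class, function, and field names included), not any vendor's actual API:

```python
# Toy sketch: fuse a vision-based water detection with mapped flood-zone
# data and parsed weather alerts, erring on the side of turning around.

from dataclasses import dataclass

@dataclass
class SegmentObservation:
    water_detected: bool      # machine vision flags standing/rippling water
    in_flood_zone: bool       # GPS position falls in a mapped flood-prone area
    active_flood_alert: bool  # a parsed weather alert covers this location

def decide(obs: SegmentObservation) -> str:
    # Litigation being what it is, a direct sighting of water always wins.
    if obs.water_detected:
        return "reverse_and_reroute"
    # No water seen yet, but an alert covers a flood-prone cell further along:
    if obs.in_flood_zone and obs.active_flood_alert:
        return "reroute_preemptively"
    return "proceed"

print(decide(SegmentObservation(True, False, False)))   # reverse_and_reroute
print(decide(SegmentObservation(False, True, True)))    # reroute_preemptively
```

Note the asymmetry: seen water forces an immediate reversal, while alert-plus-map data only triggers a preemptive reroute, matching the two cases the post distinguishes.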
erikem
In addition to dealing with streetcars and RR crossings, I'd wonder if the autopilot knows anything about "Turn around, don't drown" with respect to flooded roads.
A component of the self-steering is to "see" the road ahead and the edges in some manner for some distance (with allowance for loss of edges, etc. for specific distances at specific places, such as at intersections, etc.) so a road covered with water should be detected and avoided somehow. Of course that doesn't take into account a short bridge span missing.
Semper Vaporo
Pkgs.
Tree's right on the mark. There will almost certainly always be situations that require the driver to take control, and probably suddenly. Such is the paradox of Children of the Magenta: the more complex the automation, the more trained the user must be to correct things when they go wrong.
Designers would do well to read the BEA report on the crash of AF 447. The combination of the startle factor, overreliance on technology to the detriment of basic airmanship, and lack of procedures for what to do when things go wrong led to the pilots stalling and crashing a fully functional aircraft.
What if the car suddenly gives back control when an icy skid becomes outside parameters?
Strikes me this "self-driving car" thing is a high-tech way to bring us back to the horse and buggy days, you know, when a driver had a horse to help him with his thinking?
Fall asleep in the driver seat? That's OK, Dobbin knows the way home!
Another danger with self-driving vehicles is that drivers will forget how to drive (although one could argue that some never knew). Unless such vehicles have 100% functionality, there will be times when the occupants may have to actually control the vehicle.
Back in the 60's, I recall reading an editorial about a similar concept. Speed would be controlled, vehicle spacing could be controlled, in short, the vast majority of mistakes that drivers make which result in collisions, etc, could be controlled and avoided. Included in the piece was reference to "killer trees," at the time a hot topic, referring to trees close enough to the highway to present a hazard to someone going off the road.
Then, the editorial concluded, some dummy would still roll his vehicle and kill himself anyhow...
I would argue that local governments would fight external control of speed (ie, enforcing speed limits) due to a loss of revenue.
On the other hand, insurance companies would love the opportunity to monitor your driving habits, as they would then have justification for raising rates.
And where such enforcement applies to controlled access highways - those roads were designed for 70+ (except in urban areas). Anything less is just a way to gain revenue.
Larry
Resident Microferroequinologist (at least at my house)
Everyone goes home; Safety begins with you
My Opinion. Standard Disclaimers Apply. No Expiration Date
Come ride the rails with me!
There's one thing about humility - the moment you think you've got it, you've lost it...
Completely self-driving cars are still a few years off, although cars that steer themselves down the highway are already here (check out the Mercedes E-Class).
Euclid
Other than the fashion statement of having a car that drives you around automatically, what exactly is the point of the idea? Is it to remove the fallibility of a human driver, and thus make driving safer? ... This self-driving car development seems to have leapfrogged past my prediction and taken it to its ultimate level.
A good introduction to the possibilities and 'science' of autonomous vehicles is Brad Templeton's blog (and site). Before that, there are some pretty good 'imagineering' discussions as far back as the GM automated-highway proposals of the Forties (and some before that, but not with evolved hardware to make them practicable).
"Safety" -- especially safety as misdefined as 'rigid adherence to motor-vehicle statute requirements at all times' -- is only one of the things that autonomous vehicles can be programmed to achieve. One very significant advantage is continuously-optimized route guidance, without the foreground-attention problems that come from human drivers having to cope with crApple-style GPS navigation interfaces. Another is integration between physical route requirements and vehicle power requirements -- particularly when doing hybrid power management or making 'best use' of BEV-style external charging. Yet another is tolerance for human failure (which may be as simple as sudden dazzle, as 'preventable' as driving intoxicated, or as catastrophic as sudden stroke or heart attack) with at least the tacit ability to transport passengers safely to what may be an emergent destination rather than just 'going to the curb and stopping when safe' and flashing an alert signal to 'the appropriate authorities'.
I thought the 'enforcement' of safe driving behavior was going to come from a radically different direction: the mandatory installation of driver monitoring modules as a precondition of insurance coverage. (That would be followed up by nanny-state enforcement of the modules as 'proof of required insurance coverage', probably with some ability of the police to wirelessly interrogate all passing vehicles and shut down/tow/confiscate "violators" and arrest the nominally responsible parties in finest Goliath tradition). I thought this would actually lead to greater 'embracing' of autonomous vehicle control, as it would produce at least 'plausible denial' that any observed nonlegal operating practices were 'offenses' under motor-vehicle statutes.
One problem that Templeton apparently took up in detail is what happens when savvy violators 'game' the situation where multiple autonomous vehicles meekly yield the right-of-way to more aggressive vehicles. My conclusion (which was not his) was that pervasive dashcam video could, and should, and would be used to enable any driver or passenger to quickly rat out any violations, and report them reliably (and with at least a good leg up on chain-of-evidence integrity) in precisely a way that can optimize targeted enforcement -- quick, or delayed. But it might be argued that setting up such a network of dashcams has nothing nominal to do with autonomous operation as a contributor to overall safety, and might be a higher priority to implement (even though its communications infrastructure and many of its instantiations make autonomous operation much more effective and safe).
Self-driving cars are dramatically over-promised. It seems like politically correct utopianism. To jump right into this self-driving dream relying solely on computer technology is naïve. Here is what I don't quite understand: Other than the fashion statement of having a car that drives you around automatically, what exactly is the point of the idea? Is it to remove the fallibility of a human driver, and thus make driving safer?
A few years ago, I wrote about my prediction for the future of driving where cars have a layer of automatic, central control that enforces the laws as you drive. I thought it would develop as a natural extension of the current proposals to tax drivers by the miles driven, which, in effect, turns all roads into toll roads. Instead of getting people out of their cars and into mass transit, it would turn people’s cars into a form of mass transit. This self-driving car development seems to have leapfrogged past my prediction and taken it to its ultimate level.
The early days of railroads saw all kinds of disasters, including bridge collapses, head-on collisions, derailments for no known cause, etc. The problems were overcome, and today railroads are one of the safest modes of transportation in the world.
The early days of aviation saw headline grabbing disasters while little old ladies and men in tennis shoes proclaimed, "If god wanted man to fly, he would have given him wings." Today’s passenger statistics show that most people believe air travel is a safe bet.
Someday driverless cars will reach the same technological excellence that we see in rail and air travel. You can take it to the bank, although many of us won't live long enough to see it.
Rio Grande Valley, CFI, CFII
Given the problems humans have had following their GPS onto railroad tracks and the like, I wouldn't put much faith in a self-piloting car to avoid such an unexpected obstacle as a flooded road, particularly because it doesn't have a vertical component as would another vehicle, etc.
If nothing else, perhaps the recent incidents will point up shortcomings in the software or hardware and make way for improvement. Unfortunately, glitches in that type of software have more consequences than they would in your home computer...
USA Today had an article reporting that Teslas have had three accidents while in automatic (self-driving) mode within the last two weeks.
Jeff
CandOforprogress2
https://www.youtube.com/watch?v=Fu-zJbWK4B4
Now self-driving cars, RR crossings, and streetcars don't mix.
I just saw a Wall Street analyst on CNBC talking about this. He was ripping Elon Musk a new one, and stating how Musk's ego has got in the way in developing the self-driving feature of the Tesla.