
Self-driving cars?


EddieB

The psychology of overtaking is interesting. It's as if we have a very solid belief in our right, our duty even, to overtake a vehicle that isn't able to go as fast as the one we are in charge of.

 

Actually, we have only a very tightly circumscribed right, certainly not a duty, and looked at objectively, we seldom have an imperative reason either.

 

What code would we write for this auto-drive car? Would it be code that required it to overtake whenever safe to do so, as soon as possible, presumably the "prime directive" being "reach destination as fast as safely possible"; or, would it be code that didn't even prompt the "overtaking subroutine" unless there was an imperative to do so, and if the latter, what imperatives would we permit/include? 

 

(For these purposes, I'm not counting "passing slower vehicle in a parallel lane" on a motorway-like road as "overtaking", I'm thinking mainly of one-lane-in-each-direction roads.)
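In pseudocode-ish Python, the two candidate policies might look something like this. Everything here is invented for illustration: "minutes_saved" would have to come from route or traffic data, "imperative" stands in for whatever hard reasons we chose to permit, and the five-minute threshold is arbitrary.

```python
# A minimal sketch of the two candidate overtaking policies.
# All inputs and thresholds are hypothetical, for illustration only.

def should_overtake(policy: str,
                    safe_to_do_so: bool,
                    minutes_saved: float,
                    imperative: bool = False) -> bool:
    """Decide whether the overtaking subroutine should fire at all."""
    if not safe_to_do_so:          # the safety gate applies under either policy
        return False
    if policy == "prime_directive":
        return True                # overtake whenever it is safe to do so
    if policy == "imperative_only":
        # only prompt the subroutine for a real reason or a worthwhile gain
        return imperative or minutes_saved >= 5.0
    raise ValueError(f"unknown policy: {policy}")
```

The interesting design question is entirely in the second branch: which "imperatives" we would permit, and where the time-saving threshold should sit.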

 

 

Edited by Nearholmer

I frequently drive at around 50 mph on motorways [for many vehicles, that is their speed limit!]....

 

I am governed more by revs than anything else.

But, even at 50 I find I'm overtaking stuff.

The best way to overtake that lorry doing 50 is to wait until the motorway curves to the right... then you'll be 'inside' the lorry, covering less distance at 50.

I rarely start passing stuff on left-hand curves, as it takes longer.
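The "inside of the curve" point is simple geometry: the overtaking lane on a right-hand curve traces a slightly smaller radius. The figures below (500 m curve radius, 3.5 m lane offset, a 90-degree sweep) are assumed purely for illustration.

```python
import math

# Arc length = radius x angle (in radians). On a right-hand curve the
# overtaking lane is the inner lane, so it covers slightly less distance.
# Radius, lane offset and sweep angle are assumed, illustrative figures.

def arc_length_m(radius_m: float, sweep_deg: float) -> float:
    return radius_m * math.radians(sweep_deg)

outer = arc_length_m(500.0, 90.0)          # the lorry's lane
inner = arc_length_m(500.0 - 3.5, 90.0)    # the overtaking lane
saving = outer - inner                     # about 5.5 metres over the curve
```

A few metres over a long curve, so it's a marginal gain rather than a dramatic one, but the direction of the effect is real.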


1 hour ago, Nearholmer said:

I reckon it’s a really complex problem for any robot to ‘see’ cyclists on a road, and likewise to ‘see’ pedestrians, stray dogs, etc etc.

 

As human beings, we are simultaneously processing a vast array of different ‘input’, and factoring-in experience. It’s not a case of simple shape recognition, we are processing colour, movement, etc etc, plus ‘environmental probability’ (is this a place where one might expect kids to play?), precursor signals (ice cream van, ball rolling across road).


At our best, we are stunningly good at this; at our worst we are rubbish.

 

All that is likely to be impossible to achieve with conventional programming, which is why much of the current research into autonomous robotic systems is based around self-learning artificial intelligences. These AIs are configured to learn a task much as a human does.


56 minutes ago, alastairq said:

 Splendid that you had the confidence to abort an overtake.

 

It always amazes me how other drivers will vilify a driver for commencing an overtake, then aborting and pulling back in.

Yet, understanding the need to abort an action is probably more important than having the skill and guesstimation to make the overtake in the first place.

 

 

It was not quite like that: go/no-go was decided before I needed to pull out, so the abort was just a case of braking. I would not pull out until I was sure I could complete the overtake safely.



Sorry, answered wrong thread.

 

Brit15

Edited by APOLLO

3 hours ago, woodyfox said:

Because the motorist knocks me off due to carelessness or recklessness, then I should desist?

It's your decision what level of risk you're willing to run, but if an incompetent motorist knocks you off your bike, it will be little comfort to know that you had right of way, even if you can prove that in a court of law.

 

If the same motorist collides with you in a car, at least you have a better chance wearing a seat belt with some protection from a crumple zone or air bag. You'd be safer still in a train, but of course you still have to get to and from the station somehow.


25 minutes ago, Joseph_Pestell said:

Whose insurance gets clobbered if the self-drive car causes an accident?

 

Bill Gates / Elon Musk 50 / 50 !!!!!!!!!!!!!!

 

Brit 15


22 minutes ago, Titan said:

I would not pull out until I was sure I could complete the overtake safely.

 No problem with popping out to get a better view [if safe to do so?]

If you don't like the view, pop back in?

Whether an abort involves braking, or moving back in, or whatever, the overtake is aborted once you have made up your mind, so to speak - not just when you've started crossing the centre line.

So in your instance, the 'abort' occurred once a 'no go' was realised...

Overtaking involves planning. [Something which a lot of drivers don't do until the very last split second?]...

So if the plan is a no go, it is 'aborted' for now.


54 minutes ago, Joseph_Pestell said:

Whose insurance gets clobbered if the self-drive car causes an accident?

 

I'd agree with 30801: the car owner's insurer would pay for the loss, but I would also expect them to pursue the manufacturer to recover their costs if they had good evidence of a manufacturing or design defect, which is what "unfit" software would be.

 

Proving that the software was "unfit" would be an interesting exercise though, especially if, as I expect it would have been, it was certified safe to a defined level by an independent safety assessment body.

 

What would happen to your no-claims bonus and next premium is another question.

 

We had a slightly similar case, where a fire was caused in our kitchen, causing thousands of pounds' worth of damage, due to a faulty gas fitting. The insurer paid up without quibble (I had to provide estimates and receipts, but that was reasonable), but they then pursued the builders of the house (it was about eight years old at the time), and proved that they had received, and failed to react to, a product-defect advisory notice issued by the appliance-maker shortly after the house was built.

 

To cut a long story slightly shorter, our insurer did not raise our premiums, and undertook not to have the claim "count against us" if we sought quotes from other insurers.


1 hour ago, Nearholmer said:

I reckon it’s a really complex problem for any robot to ‘see’ cyclists on a road, and likewise to ‘see’ pedestrians, stray dogs, etc etc.

 

As human beings, we are simultaneously processing a vast array of different ‘input’, and factoring-in experience. It’s not a case of simple shape recognition, we are processing colour, movement, etc etc, plus ‘environmental probability’ (is this a place where one might expect kids to play?), precursor signals (ice cream van, ball rolling across road).

 

That's where Artificial Intelligence comes into its own.  Just like a human, such technology has to learn what combinations of visual or other clues can indicate a likely outcome.  Yes, there is a colossal learning curve.  A human has a learning curve too, and the reason we don't allow a five year old to drive is that he hasn't learned enough yet.  But once machines have learned, their experience can be passed from machine to machine, rather than each individual having to make the same mistakes as its elders and betters have already made.  And the machine won't doze off at the wheel after a long day at work.  

 

As you say, however, recognition of cyclists, pedestrians and stray dogs is a major technical challenge, but they do seem to be getting there.

 

Long term the machines must win, because humans will always be subject to human error. Autonomous vehicles will become safer than us at the wheel once their statistical error rate is lower than ours. That's not to say that accidents will cease completely - a regulatory body will still need to investigate such accidents as do occur and take remedial action. As I recall, the early OD (obstacle detection) level crossings had to be manned during their first winter because the lidar arrangement didn't cope adequately with falling snow - but the problem has been fixed.

 

Machines rarely make mistakes - the computer bugs that everybody whinges about are down to human design errors and oversights when we are programming the machines. 


19 minutes ago, Michael Hodgson said:

Machines rarely make mistakes - the computer bugs that everybody whinges about are down to human design errors and oversights when we are programming the machines. 

Not so sure about that. Historically, machines have been confined to fairly straightforward, simple tasks, where they operate in a very different way to people. Moving on to this sort of application, I'd put my money on them making different mistakes. Fewer overall, maybe (it's relatively easy to be better than the worst humans, who cause most of the problems), but I think the sort of perception and understanding of a situation that a human being is capable of is still in the science-fiction regime when it comes to machines. They can do a lot without requiring that level of comprehension, but there will be cases of them doing things wrong that would be obvious to a person.


59 minutes ago, Michael Hodgson said:

Autonomous vehicles will become safer than us at the wheel once their statistical error rate is less than ours. 

 

Indeed.

 

I wonder (though I don't wish to be on the receiving end) what new or different mistakes they will make. They are very likely to show a different error pattern from ours, because the errors we tend to make are a function of our evolution, just as much as the things we tend to be good at.

 

[Ah, just realised that Reorte has made the very same point]

 

They might also be able to make use of senses that we don't have - one that occurs to me is an ability to detect the proximity and direction of a mobile 'phone, which wouldn't be a good indicator in isolation of the presence of a person, but it might be something that could be factored-in to their thinking.

Edited by Nearholmer

Where I live (classed as a small town), I frequently drive (less than 10 miles) to the next market town to do shopping. The route from home, after about half a mile, is a single carriageway laid on an old railway trackbed, so straight and basically flat (it is Fenland). The road at the end of the trackbed curves left to a roundabout which leads into the next town - actually a fair distance in from the roundabout. However, about three-quarters of the way along the trackbed there is now a 50 mph speed limit (the site of the old village station), with traffic lights at a junction. Otherwise the main road has the national speed limit of 60 mph.

Now, my point is this. Let's take the return journey home. I leave the town centre area, 30 mph speed limit almost up to the main-road roundabout (and yes, I am one of the few that respects speed limits). Often a few frustrated cars behind me! I join the main road, which has many artics on it (56 mph) and a lot of farm vehicles too (now 40 mph I believe, but usually much slower). If I'm not stuck in a queue, it's 60 mph for half a mile at most, then 50 for the next mile (old station area) - always there are frustrated drivers behind, often trying to overtake around the centre bollards! Out of the 50 into 60 on the straight. This road is known by the local police as 'murder mile' due to the number of fatalities, btw. It is rare to be able to overtake between the end of the 50 mph limit and my home roundabout, just too much traffic the other way, but sometimes large gaps make it possible. BUT - apart from slow tractors, why would I need to overtake? An artic at 56 mph? If I do, he catches me up at the next tractor I get behind, or the next roundabout. The point is - as this was drummed in to me when I took my Police driving test (as a civvy, for my job with them) - IS THE OVERTAKE REALLY NECESSARY?

How will a driverless car cope with exactly that situation, I wonder? I notice every time I do that journey that those who overtake actually gain NOTHING by doing it. And this is common in many places, not just this one example.

 

Stewart

Edited by stewartingram

9 minutes ago, stewartingram said:

The point is - as this was drummed in to me when I took my Police driving test (as a civvy, for my job with them) - IS THE OVERTAKE REALLY NECESSARY?

How will a driverless car cope with exactly that situation I wonder.

That’s exactly what I try to do. Sometimes I’m impetuous so overtake anyway, but most of the time I’m reading the road (and traffic queue) ahead. If I can see lots of other traffic ahead, why burn the energy to overtake? Tucked in, I’m in the slipstream and saving energy. I’m also saving my mental energy by not getting frustrated.

 

I’m currently visiting the A47 from P/Boro to Norwich weekly for work (we will be dualling it). No point overtaking on the single c/way stretches most of the time, as it doesn’t get you there any quicker.

 

Too many don’t do the same and get angry and do daft things. I enjoy zipping past them once we hit a dual section having caught them back up.

Edited by black and decker boy

Back to the algorithm question I asked earlier.

 

Maybe the robot will be able to access good, high-resolution, real-time traffic data. If so, it will be able to judge the time saving possible from an overtaking move.

 

If that is possible, the number of ‘overtakes’ will reduce markedly, because as far as I can see a very high proportion yield no time saving, or a minuscule one.
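The "minuscule saving" point can be checked with simple arithmetic. The figures below are illustrative, and optimistic in the robot's favour, since they assume the higher speed can be held for the whole stretch.

```python
# Illustrative arithmetic: minutes gained by overtaking a slower vehicle,
# assuming (optimistically) that the higher speed is held for the whole
# stretch. Speeds in mph, distance in miles.

def minutes_gained(stretch_miles: float, slow_mph: float, fast_mph: float) -> float:
    return 60.0 * stretch_miles * (1.0 / slow_mph - 1.0 / fast_mph)

# Two miles stuck behind an artic at 56 mph, versus cruising at 60 mph:
gain = minutes_gained(2.0, 56.0, 60.0)   # under nine seconds
```

A tractor at 15 mph over the same two miles is a different matter (several minutes), which is roughly the distinction a time-saving-aware algorithm would draw.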


6 hours ago, Nearholmer said:

In the context of this discussion, the issue is whether auto-driven cars can (a) detect cyclists, and (b) follow the Highway Code. I’d be optimistic that on both points they’d do better than people taken as a whole, especially on the latter point.

One of the early AI failures of an autonomous test vehicle (I believe it was Google's) involved a cyclist. The car detected the cyclist and stopped. As I understand it, the cyclist was balancing at a standstill, rocking backward and forward. The AI software got confused and shut the car down. This sort of interaction is exactly the thing that such testing is intended to find.

 

I missed this post from earlier:

4 hours ago, 30801 said:

There was an article about a Google car (I think) being confused by a stationary cyclist at a junction doing a track stand. The car was programmed to assume cyclists with feet off the ground must be moving.

Edited by Ozexpatriate

4 hours ago, Nearholmer said:

As human beings, we are simultaneously processing a vast array of different ‘input’, and factoring-in experience. It’s not a case of simple shape recognition, we are processing colour, movement, etc etc, plus ‘environmental probability’ (is this a place where one might expect kids to play?), precursor signals (ice cream van, ball rolling across road).

Sensor fusion is the part of AI for autonomy where the most work is going on. Everything you mention is factored in. With the exception of signals and signage, colour is not very important.
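One intuition for why fusing sensors helps so much: if two sensors miss a hazard for unrelated reasons, the chance both miss it at once collapses. The sketch below is a toy probability model, nothing like a real fusion stack (which uses Kalman filters, learned models and much more), and the independence assumption is itself an idealisation.

```python
# Toy illustration of sensor fusion's payoff: assuming camera and lidar
# miss a hazard independently, the fused miss rate is the product of the
# individual miss rates. Real fusion stacks are far richer than this.

def fused_detection_probability(p_camera: float, p_lidar: float) -> float:
    """Probability that at least one sensor detects, given independent misses."""
    return 1.0 - (1.0 - p_camera) * (1.0 - p_lidar)

# Two mediocre 90% sensors combine into a 99% detector under this model:
combined = fused_detection_probability(0.9, 0.9)
```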

 

Edited by Ozexpatriate

20 minutes ago, Nearholmer said:

Maybe the robot will have the ability to access good, real time traffic data, as high res. If so, it will be able to make judgements about the time-saving possible by an overtaking move.

 

 

If it has access to good real time traffic data it will be able to save far more time by choosing an alternative route that isn't clogged by traffic, road works or accidents than it will by being a bit nippy with tight overtaking margins.  The difficulty at present of course is quality of data - such as the accuracy of the traffic reports, and the difficulty in predicting what the situation will be like by the time you reach any of the choke points. 

 

My satnav purports to give me optimal routes and does react to published road works.  However planned road closures or reopenings don't always happen when they're supposed to, and its internal road map doesn't get updated (unless I were to buy a very expensive data upgrade).  It gives me completely wrong information near Kettering because a roundabout got moved.  And when I'm driving in the vicinity of the A14 near Cambridge for example, it thinks I'm driving across a ploughed field!  There's even a one-way street in Letchworth which is now the other way since the map was created.


45 minutes ago, Nearholmer said:

Maybe the robot will have the ability to access good, real time traffic data, as high res. If so, it will be able to make judgements about the time-saving possible by an overtaking move.

It is far easier for two autonomous vehicles to negotiate space on the road than the interaction between an autonomous vehicle with a human operated one, or two human operated vehicles - where the only telltales are turning indicators, brake lights and highly variable operator perception.

 

One of the advantages for having more autonomous vehicles in traffic is that they can maintain smaller separations at higher speeds by communicating intent to each other. Human drivers will remain unpredictable.

 

Such arbitration is something computing systems do constantly.
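The "smaller separations" point follows from the fact that the following gap is dominated by reaction time. A crude model, with assumed figures: if both vehicles brake equally well, the gap needed is essentially distance covered during the reaction delay, and V2V messaging shrinks that delay from human seconds to radio milliseconds.

```python
# Crude headway model: assuming equal braking capability front and rear,
# the gap needed is roughly speed x reaction time. The reaction times
# below are assumed, illustrative values.

def following_gap_m(speed_mps: float, reaction_s: float) -> float:
    return speed_mps * reaction_s

human = following_gap_m(25.0, 1.5)   # ~37.5 m at 25 m/s (about 56 mph)
v2v   = following_gap_m(25.0, 0.1)   # ~2.5 m if braking intent is signalled by radio
```

The model ignores unequal braking, sensing latency and safety margins, so real platooning gaps would be larger, but it shows why removing the human reaction delay matters so much to road capacity.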

1 hour ago, Nearholmer said:

They might also be able to make use of senses that we don't have - one that occurs to me is an ability to detect the proximity and direction of a mobile 'phone, which wouldn't be a good indicator in isolation of the presence of a person, but it might be something that could be factored-in to their thinking.

Yes. They will be able to wirelessly communicate with each other and arbitrate.

 

Infrared cameras have long been an option on luxury cars to detect hazards like deer. These sensors are more useful in detecting mammals (including humans) than looking for a mobile ping. They are an input to sensor fusion.

 

 

Edited by Ozexpatriate

2 minutes ago, Ozexpatriate said:

Sensor fusion is the part of AI for autonomy where the most work is going on. Everything you mention is factored in. With the exception of signals and signage, colour is not very important.

Driving to visual signs is sub-optimal. These vehicles need to communicate dynamically with one another - and it may well be that it proves safer for them to tailgate one another and improve headways/road capacity, whilst they obviously shouldn't be doing that with any manually driven cars that might be using the same roadway. It might well make sense for them to receive "traffic light" aspects electronically in advance of them changing, so that they can predict the best time to accelerate and decelerate. They should be able to agree a protocol which enables them to zoom across one another's paths in the manner of a well-rehearsed motorcycle display team. As long as the old-fashioned driver is still on the roads, the technical challenge is much harder, as they have to make allowances for unpredictable human behaviours. And I'm sure we've all encountered the idiot who signals left and turns right!
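The "predict the best time to accelerate and decelerate" idea can be sketched very simply. Assuming a hypothetical advance message giving seconds until the light turns green, the car can pick an approach speed that arrives just as it changes, rather than braking to a stop. The message format, speed floor and figures here are all invented for illustration.

```python
# Sketch of green-wave approach-speed selection, assuming a hypothetical
# vehicle-to-infrastructure feed of "seconds until green" per signal.
# All parameters are illustrative.

def approach_speed_mps(distance_m: float, seconds_to_green: float,
                       limit_mps: float, floor_mps: float = 2.0) -> float:
    if seconds_to_green <= 0:           # already green: proceed at the limit
        return limit_mps
    ideal = distance_m / seconds_to_green   # arrive exactly as it changes
    return max(floor_mps, min(limit_mps, ideal))

# 200 m out, green in 20 s, 30 mph limit (~13.4 m/s): cruise in at 10 m/s
speed = approach_speed_mps(200.0, 20.0, 13.4)
```

Never stopping at all saves both time and the energy of a full acceleration from rest, which is the appeal of sending aspects ahead electronically.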

