Urgent Safety Message from RAIB


Recommended Posts

On 01/06/2023 at 15:40, PaulRhB said:


It’s pretty difficult to remove humans: all current logic-based automatic systems are programmed by humans, and any future AI will have at least partially learnt from humans. There isn’t actually a way to eliminate humans from the decision-making process. Any intelligent learning system has to learn all possibilities, and you do that by making mistakes.

 

Removing humans isn't the most important point. Removing variability is. Many (perhaps most) accidents occur because the rules were not followed and not because the rules were wrong - computers are better at following rules than we are.

18 minutes ago, icn said:

Many (perhaps most) accidents occur because the rules were not followed and not because the rules were wrong -


Not following rules is definitely a major consideration, but why they weren’t followed can be due to a multitude of reasons. I’ve even seen a RAIB report concede they did the right thing in not following a rule in the Esher incident in the early 2000s because of a unique set of circumstances. Other rules were broken post the initial incident, but the report noted the other factors that influenced the error; overall a very good report that has led to at least one other Signaller keeping trains apart from reading it. https://assets.publishing.service.gov.uk/media/547c906640f0b602410001b3/R252006_070108_Part_1_Esher.pdf

Many of the rules are written as a result of humans finding alternative ways to do something that the original creators hadn’t considered. So it’s not necessarily a case of rules being wrong, just of them not technically existing yet. The Railway Rule Book is updated every six months, and in between there are urgent updates in the Periodical Operating Notices or even the Weekly Operating Notice. New rules can also come in due to a change in an external factor, like the weather rules following the Stonehaven report. 
So the rules are constantly evolving in response to human and external factors. 

 

18 minutes ago, icn said:

computers are better at following rules than we are.


Which can be a problem if the logic isn’t adapted to all the local criteria. 
 

Computers assisting humans is usually best, but as we’ve seen in some air accidents, if one critical sensor is blocked in flight or inoperative it can lead to the computer fighting the pilot because it doesn’t have the correct information. Blindly following rules is not always the best course; sometimes intelligence needs to question it when other factors come into play. That’s how I see this urgent advice notice: it’s only asking us to look at what’s causing the error, based on new feedback. 
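To illustrate (a toy Python sketch with made-up names and thresholds, not the logic of any real aircraft or railway system): one common mitigation is to require redundant sensors to agree before the automation is allowed to act, and to hand authority back to the human when they don't.

```python
# Toy sketch: made-up names and thresholds, not any real aircraft or railway system.
DISAGREEMENT_LIMIT = 5.0  # assumed limit on how far two redundant readings may differ

def automation_may_act(sensor_a, sensor_b):
    """Allow an automatic intervention only when both readings exist and agree."""
    if sensor_a is None or sensor_b is None:
        return False  # blocked/inoperative sensor: don't fight the human on bad data
    if abs(sensor_a - sensor_b) > DISAGREEMENT_LIMIT:
        return False  # sensors disagree: raise an alert and defer to the human
    return True

print(automation_may_act(2.1, 14.9))  # False - one reading stuck at a bogus value
print(automation_may_act(2.1, 2.4))   # True  - readings agree, automation may act
print(automation_may_act(None, 2.4))  # False - a sensor is missing
```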


38 minutes ago, PaulRhB said:


Not following rules is definitely a major consideration, but why they weren’t followed can be due to a multitude of reasons. I’ve even seen a RAIB report concede they did the right thing in not following a rule in the Esher incident in the early 2000s because of a unique set of circumstances. Other rules were broken post the initial incident, but the report noted the other factors that influenced the error; overall a very good report that has led to at least one other Signaller keeping trains apart from reading it. https://assets.publishing.service.gov.uk/media/547c906640f0b602410001b3/R252006_070108_Part_1_Esher.pdf

 

That one is interesting, but also something that software can be programmed to take into account. This is a fairly easy scenario to imagine, and one that can fairly easily be taken into account when developing the relevant control software. The rule book is likely written as it is because humans are not computers that can predict the likely outcomes of stopping all trains vs not stopping all trains. They had to write something that a human can humanly process in case of an incident. Once you have computers with prediction, you can update the rules to increase safety in a way that isn't possible when it's humans that are processing the rules. Insisting on continuing with rules targeted towards humans is creating an artificial handicap for computer control.
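As a very rough illustration of what "rules plus prediction" could look like (a toy Python sketch with hypothetical names and numbers, not a real signalling specification), the coded rule can carry the check the Esher signaller made in his head: whether replacing a signal to danger would leave the train it protects more exposed to a following train that cannot stop in time.

```python
# Toy sketch: hypothetical names and numbers, not a real signalling specification.
def should_replace_signal(distance_available_m, follower_braking_distance_m):
    """Default rule: replace the signal to danger behind the incident.
    Coded exception: if a following train (e.g. one sliding on poor adhesion)
    cannot stop within the distance available, replacing the signal would hold
    the first train in its path, so the safer choice is to let it run clear."""
    if follower_braking_distance_m > distance_available_m:
        return False  # the Esher-type exception, now part of the rule itself
    return True       # the normal rule applies

print(should_replace_signal(800, 1200))  # False - keep the train moving clear
print(should_replace_signal(800, 400))   # True  - normal rule applies
```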

 

38 minutes ago, PaulRhB said:

Many of the rules are written as a result of humans finding alternative ways to do something that the original creators hadn’t considered. So it’s not necessarily a case of rules being wrong, just of them not technically existing yet. The Railway Rule Book is updated every six months, and in between there are urgent updates in the Periodical Operating Notices or even the Weekly Operating Notice. New rules can also come in due to a change in an external factor, like the weather rules following the Stonehaven report. 
So the rules are constantly evolving in response to human and external factors. 

 

Rules are certainly changing, and computers therefore can and will be updated as needs and rules change. The fact that rules are constantly changing is yet another reason for preferring computers - because you can apply the updated rules consistently and they'll be followed consistently. A human may fall back to older training and older rules even if they have nominally been taught the new rules; the computer won't. The fact that rules change is in no way supportive of sticking to humans.

 

38 minutes ago, PaulRhB said:

Which can be a problem if the logic isn’t adapted to all the local criteria. 
 

Computers assisting humans is usually best, but as we’ve seen in some air accidents, if one critical sensor is blocked in flight or inoperative it can lead to the computer fighting the pilot because it doesn’t have the correct information. Blindly following rules is not always the best course; sometimes intelligence needs to question it when other factors come into play. That’s how I see this urgent advice notice: it’s only asking us to look at what’s causing the error, based on new feedback. 

 

The sensor example is poignant because this thread seems to be about the equivalent human issue: the human did not read the data correctly - the human's sensors (or processing of said sensors) did not work. Now the question becomes: are sensors or humans more likely to fail? I suspect it's the latter - especially because sensor failures can be mitigated as and when new failure modes are discovered. Unfortunately humans are a bit harder to fix - psychology is far more complex than logic.

 

Neither humans nor computers will be perfect. Both will fail, but with one of them a certain failure mode can be eliminated once it's known, with the other it can't.


8 minutes ago, icn said:

No one proposed using AI for trains; there is absolutely no need to use AI in trains (perhaps beyond algorithms for processing visual data). Simple logic is more than enough.

I can see uses for AI on the railways, although it might not be appropriate for driving the trains.

 

For example, if you're running one of the lines on the Underground, there comes a point where disruption caused by something like a cord-pulled, a one-under, severe congestion at one particular station etc means somebody in a control room has to take a decision whether to cancel all trains and tell the punters to catch a bus, modify the service pattern, turn trains back part-way etc, to at least maintain a service on the rest of the line and to enable the service to resume more rapidly when the problem has cleared. Currently this is done by humans based on experience and judgment, but it is a task to which AI should be well suited.

16 minutes ago, Michael Hodgson said:

I can see uses for AI on the railways, although it might not be appropriate for driving the trains.

 

For example, if you're running one of the lines on the Underground, there comes a point where disruption caused by something like a cord-pulled, a one-under, severe congestion at one particular station etc means somebody in a control room has to take a decision whether to cancel all trains and tell the punters to catch a bus, modify the service pattern, turn trains back part-way etc, to at least maintain a service on the rest of the line and to enable the service to resume more rapidly when the problem has cleared. Currently this is done by humans based on experience and judgment, but it is a task to which AI should be well suited.

Or we could just keep doing that job with humans, rather than developing complex electronic systems to do things we can do.

 

There are some good use cases for AI (some medical examples have been shown, for example), but by and large I'm completely and utterly against using it to replace us, even if there is a marginal safety benefit.

2 minutes ago, icn said:

 

That one is interesting, but also something that software can be programmed to take into account. This is a fairly easy scenario to imagine, and one that can fairly easily be taken into account when developing the relevant control software. The rule book is likely written as it is because humans are not computers that can predict the likely outcomes of stopping all trains vs not stopping all trains. They had to write something that a human can humanly process in case of an incident. Once you have computers with prediction, you can update the rules to increase safety in a way that isn't possible when it's humans that are processing the rules.
 

 

To be honest both have flaws; the computer sensor in one aircraft case failed because the heating element that stopped it icing up had failed. Computers live in the chaotic world of humans, animals and nature, and they have their limits, because the systems supporting them (human data input, mechanical or electrical systems) can fail and lead to incorrect decisions, or to no decision at all. 
Ultimately it’s unlikely we will remove humans from the system anytime soon, as even when sensors and backups are available they aren’t always installed, due to cost. 
Programming has two potential flaws: the human making an error, or a parameter not being in the brief. Even a new chip may meet the original brief, but you can find the old version had extra functions that had been used simply because they were available; we’ve also seen that with relays. The originals used a common structure with another type, and new manufacturing techniques meant each could be made more cheaply, but only to do the specific functions in the brief. Hence you plug it in and it may take some time to find that it no longer acts exactly as before. Hopefully testing identifies it, as it did with our relays before going live, but someone or something needs to create and execute that test log. Humans are still pretty good at thinking outside the box of what might happen, compared to a computer. 

 

2 minutes ago, icn said:

 

Insisting on continuing with rules targeted towards humans is creating an artificial handicap for computer control.

 

But when the computer shuts down into safe mode, who has to check before rebooting it?

Computers also rely on multiple data links for safety-critical stuff and compare the data to ensure safety. Those links require feedback within a certain time or they switch to the backup. In one instance some maintenance work introduced a delay, so the system called on the backup, only to get two check responses at once; it decided something was trying to interfere and shut down into safety mode. It worked brilliantly in being safe, but it required a human to figure out why the computer was getting alerts, because it couldn’t identify why it got a third response. I still think working together is safer, so that one causes the other to check. Who has the ultimate override, though, is difficult, as seen in the Boeing crashes of a few years ago: if the human identifies a critical error in use but cannot overcome the computer in time, you have a problem, because the computer’s logic is absolute. A computer won’t go ‘OK, let’s try this as a last chance’. 
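The failure mode described is easy to reproduce in a toy model. Below is a minimal Python sketch (invented timings and names, not the actual system): a delayed primary response that arrives after the system has already failed over to the backup looks like an unexplained extra response, and the only safe reaction the logic has is to shut down.

```python
# Toy model of the described failure: invented timings and names, not the real system.
RESPONSE_TIMEOUT_S = 1.0  # assumed: links must answer a safety check within this window

def poll_links(primary_delay_s):
    """Poll the primary link and fail over to the backup on timeout.
    A response that turns up after failover is treated as unexplained."""
    responses = []
    if primary_delay_s <= RESPONSE_TIMEOUT_S:
        responses.append("primary")          # normal case: exactly one response
    else:
        responses.append("backup")           # failover worked...
        responses.append("late primary")     # ...but the delayed reply still arrives
    if len(responses) > 1:
        return "fail-safe shutdown: unexplained extra response"
    return "OK, answered by " + responses[0]

print(poll_links(0.2))  # 'OK, answered by primary'
print(poll_links(1.5))  # maintenance delay -> fail-safe shutdown
```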

 

2 minutes ago, icn said:

Rules are certainly changing, and computers therefore can and will be updated as needs and rules change. The fact that rules are constantly changing is yet another reason for preferring computers - because you can apply the updated rules consistently and they'll be followed consistently. A human may fall back to older training and older rules even if they have nominally been taught the new rules; the computer won't. The fact that rules change is in no way supportive of sticking to humans.


Reverting to old learning because of rapidly changing rules is a definite risk, and is why our mantra is ‘make it safe, then check the rulebook to make sure’. I always train people to know the immediate response, but then to back up the ‘head copy’ with the hard copy of the rule for the detail. 

 

2 minutes ago, icn said:

The sensor example is poignant because this thread seems to be about the equivalent human issue: the human did not read the data correctly - the human's sensors (or processing of said sensors) did not work. Now the question becomes: are sensors or humans more likely to fail? I suspect it's the latter - especially because sensor failures can be mitigated as and when new failure modes are discovered. Unfortunately humans are a bit harder to fix - psychology is far more complex than logic.


 
As to mitigating as new failure modes are discovered, that’s exactly what this alert is calling for analysis of, and it’s the human that’s identified the potential. It’s asking whether it’s the data presented that’s the issue, or the processing of it. 
Psychology vs logic: aren’t they just different levels of logic though? Psychology just adds some chemical alerts onto the pure logic, and when you’re dealing with humans and animals that work off that influence, sometimes you need that to help the computer predict the next action. A simple example is a dog on the line: we might advise drivers but we don’t stop trains. Introduce a worried owner, though, and the computer assumes they will wait until we retrieve the dog. The human knows the owner may panic and suddenly run for the dog, ignoring an approaching train. 
I’ve made decisions based on the way something’s said, because I can read the emotions better than a computer can at present. Distress can be screaming or it can be icy calm; I very likely saved two people in distress because I read the voice and actions, not just what was actually said. 
So use the power and logic of computers, but ultimately we still need the human component to predict or consider what the human or other squidgy creature is likely to do next. That line is slowly sliding towards more computer control, but there’s still a healthy margin where the human is very useful in the decision. 
 


Without getting into the technicalities of AI and computer programming, something that the tech evangelists rarely seem to mention is the socio-economic impact of replacing a whole swathe of professionals with technology. What exactly do they intend for all of these displaced people to do for a living? Or is it a case of "we don't really care as long as our programming skills are still used"?

 

Whilst I'm sure there are cases where AI and technology can, and arguably should, be used, to borrow a phrase from a popular 1970s sci-fi film: "don't be too proud of your technological terror".


On 29/05/2023 at 21:23, Michael Hodgson said:

Flashing green was an example of speed signalling - it meant reduce speed to 125; other slower trains could treat it as a clear signal.

A point of clarity - flashing green would generally not be considered an example of speed signalling, which usually refers only to how junctions are signalled. The (fairly common) practice of having signals away from junctions impose speed limits does not seem to have a commonly accepted name, though some sources refer to it as "progressive speed signalling".

 

On a somewhat related note, while of course with the move to in-cab signalling it's probably too late, I wonder if the fact that the aspects for "clear over main route" and "clear over diverging route" are the same could be a contributing factor - if the aspect for the latter was different it would serve as a reminder to check the route indicator...


When going over to colour-light signalling was there ever a suggestion about having separate heads for separate routes at junctions? i.e. similar to semaphore arms - instead of one signal head with JI, you have individual signals which may be at different heights/lateral position according to the 'priority' of the routes.

I think Japan is one country which uses this.


17 hours ago, icn said:

there is absolutely no need to use AI in trains

I saw an excellent demonstration at Birmingham University of an AI-trained system being used to monitor train bogies to detect problems. The system could detect and report nine different issues simply by listening to the sounds made by the wheels and bogies. Seemed a great use of the technology to me.
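Purely to illustrate the general idea (a Python sketch with made-up frequency bands and thresholds, not the Birmingham system, whose internals weren't described): acoustic monitoring broadly comes down to extracting features such as frequency-band energies from the wheel and bogie sound and feeding them to thresholds or a trained classifier.

```python
# Illustrative only: made-up frequency bands and thresholds, not the Birmingham system.
import numpy as np

SAMPLE_RATE = 16_000  # Hz, assumed microphone sample rate

def band_energies(audio, bands=((0, 500), (500, 2000), (2000, 8000))):
    """Split a mono recording into coarse frequency-band energies."""
    spectrum = np.abs(np.fft.rfft(audio)) ** 2
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / SAMPLE_RATE)
    return [spectrum[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands]

def flag_wheel_flat(audio, ratio_threshold=5.0):
    """Very crude rule of thumb: a wheel flat shows up as repeated impacts,
    i.e. disproportionate energy in the mid band. A real system would feed
    many such features to a trained classifier covering many fault types."""
    low, mid, high = band_energies(audio)
    return mid > ratio_threshold * (low + high + 1e-9)

# One second of synthetic "healthy" rolling noise:
rng = np.random.default_rng(0)
print(flag_wheel_flat(rng.normal(size=SAMPLE_RATE)))  # usually False
```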

 

Yours, Mike.


15 hours ago, Supaned said:

socio-economic impact of replacing a whole swathe of professionals

It all depends on what is being replaced. Humans are not very good at detecting infrequent abnormal situations where they are required to monitor stuff constantly. Computers never get bored or tired or go for a cuppa.

 

I saw a great demo of a system looking to detect security incidents like unattended bags on stations, based on computers constantly watching video feeds. The system could detect humans (and other animals, like dogs), bags and other inanimate objects, and was then trained to look for bags that got left unattended. It was used to alert human operators, who could investigate further. Seems a great idea to offload the boring and tedious stuff to computers.
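As a sketch of the sort of rule such a system layers on top of its object detector (Python, with hypothetical names and thresholds - the demo's internals weren't described): a bag is flagged once it has sat still for a while with no person within a given distance, and the alert goes to a human operator.

```python
# Hypothetical rule layer over an object detector; names and thresholds invented.
import math

UNATTENDED_SECONDS = 60      # how long a bag may sit before it is flagged
OWNER_RADIUS_M = 3.0         # how close a person must be to count as "attending"

def unattended_bags(bags, people, now):
    """bags: [{'id': ..., 'pos': (x, y), 'stationary_since': t}, ...]
    people: [(x, y), ...] - detector positions for the current frame."""
    alerts = []
    for bag in bags:
        sat_for = now - bag["stationary_since"]
        nearby = any(math.dist(bag["pos"], p) <= OWNER_RADIUS_M for p in people)
        if sat_for >= UNATTENDED_SECONDS and not nearby:
            alerts.append(bag["id"])     # hand this to a human operator to check
    return alerts

bags = [{"id": "bag-1", "pos": (10.0, 4.0), "stationary_since": 0.0}]
print(unattended_bags(bags, people=[(10.5, 4.2)], now=90.0))  # [] - owner nearby
print(unattended_bags(bags, people=[(25.0, 9.0)], now=90.0))  # ['bag-1'] - alert
```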

 

Yours, Mike.


The problem with increasing automation and autonomy is when humans are still expected to act as the ultimate guarantor of safety if control systems fail. That assumes that a human who spends their working days staring out of the window, doing admin tasks, watching YouTube videos or whatever will retain the necessary skills and expertise to intervene in an emergency, a highly questionable assumption. I got a bit fed up of reviewing HAZID reports which fell back on operator intervention if systems go tango uniform.

1 hour ago, jjb1970 said:

The problem with increasing automation and autonomy is when humans are still expected to act as the ultimate guarantor of safety if control systems fail. That assumes that a human who spends their working days staring out of the window, doing admin tasks, watching YouTube videos or whatever will retain the necessary skills and expertise to intervene in an emergency, a highly questionable assumption. I got a bit fed up of reviewing HAZID reports which fell back on operator intervention if systems go tango uniform.

Exactly so - as a recent experience of mine proved. A few weeks back my train to Exeter suffered a delay of 18 minutes because someone, or something, at TVSCC had allowed an empty stone train to leave Theale with an inadequate margin. Did a human err or not intervene because he was used to the system running itself, or did a human intervene without sufficient thought about the margin available because the machine usually ran things and the human lacked experience in making such a decision?

 

It's a mistake a human could have made when the area was still controlled by Reading panel, but as the human was then the normal regulating decision maker and should have known the margins, would that mistake have happened back then? A sort of Catch-22 develops when there is a high level of automation (or AI) and a human has to step in when the equipment falls down, but lacks the experience to do so.

 

Some years back I carried out a factory test on a new signalling control system prior to its intended introduction into use at a particular power signal box. I had to perform various tasks, observed by an ergonomics expert, carrying out all the usual things a Signaller would do. Great stuff - dead easy to work the screen displays using a superb mouse specially designed for the job.

 

But then we came to an axle counter failure, where I rapidly found that if there was a multiple failure - where several trains could be involved - I could reset every axle counter simultaneously with a click of two mouse buttons - and an AI system programmed in the same way would do exactly that. The reset process was far too simple even if I applied everything in the Rule Book to the process. The system designers had looked at it from one direction but not from others, and they had been led by somebody advising them about the Rule Book requirements. In other words the system design had still had a human input, and it was flawed.
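To make that concrete (a toy Python sketch with hypothetical names, not the system under test or any real interlocking): one obvious mitigation is to force a separate, explicit confirmation for each failed axle counter section, rather than letting a single action restore them all.

```python
# Toy sketch, hypothetical names: not the tested system or any real interlocking.
def reset_axle_counters(failed_sections, confirm):
    """Reset failed axle counter sections one at a time.
    `confirm` is a callback (ultimately a human, or a much richer check)
    that must approve each individual section before it is restored."""
    restored, held = [], []
    for section in failed_sections:
        if confirm(section):        # e.g. "has this section been proved clear of trains?"
            restored.append(section)
        else:
            held.append(section)    # stays failed until properly checked
    return restored, held

# Example: only sections proved clear get reset; the rest stay protected.
proved_clear = {"T102", "T105"}
print(reset_axle_counters(["T101", "T102", "T105"], confirm=lambda s: s in proved_clear))
# (['T102', 'T105'], ['T101'])
```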

 

Interestingly, I also carried out an Independent Safety Assessment of parts of the specification for a heavily automated system which was being designed to control most of the WCML south of Crewe, and I found a flaw in that specification which was readily identifiable as being based on inadequate information the designers had been given about Exceptional Load forms. Again the problem was the human input, because the person who had given them the information, and had presumably then checked what it meant in the design specification, obviously didn't properly understand the full range of ramifications of a train running under the authority of an Exceptional Load form.

 

It was easy enough to correct the design spec, and it showed the value of a properly informed ISA procedure. But it also showed how things can go wrong when designing automated processes dealing with complex matters if you ask the wrong person - and how do you know who is the right person?

 

So I retain a degree of scepticism when anybody tells me it's a simple matter of looking at 'the rules' and incorporating them into an automated, let alone AI, system.


23 hours ago, PaulRhB said:

Not following rules is definitely a major consideration, but why they weren’t followed can be due to a multitude of reasons. I’ve even seen a RAIB report concede they did the right thing in not following a rule in the Esher incident in the early 2000s because of a unique set of circumstances.

I presume it is this one: https://www.railwaysarchive.co.uk/documents/RAIB_Esher2005.pdf

 

The signaller realised that if he put the signals back against a particular train, as required by the rules, it would be more at risk of being struck by another one that had passed a signal at danger due to poor adhesion.  

 

Another one here: https://www.railwaysarchive.co.uk/docsummary.php?docID=4795

 

The driver did the right thing in reversing the train off an embankment that was being undermined by flooding, without consulting Control first.  

 

16 hours ago, keefer said:

When going over to colour-light signalling was there ever a suggestion about having separate heads for separate routes at junctions? i.e. similar to semaphore arms - instead of one signal head with JI, you have individual signals which may be at different heights/lateral position according to the 'priority' of the routes.

I think Japan is one country which uses this.

Two hazards here: a multiplicity of signal aspects close together can be difficult to discern from a distance, and if the individual signals have red aspects then the driver has to pass a red signal, which is considered to degrade its importance in other situations (though it's allowed for subsidiary aspects, and could be got round by only having one aspect, lit only when no proceed aspect was displayed). Problems of signal sighting may also be worse if multiple signal heads need to be easily viewable somewhere where space for them is limited.  

2 hours ago, The Stationmaster said:

Interestingly, I also carried out an Independent Safety Assessment of parts of the specification for a heavily automated system which was being designed to control most of the WCML south of Crewe, and I found a flaw in that specification which was readily identifiable as being based on inadequate information the designers had been given about Exceptional Load forms. Again the problem was the human input, because the person who had given them the information, and had presumably then checked what it meant in the design specification, obviously didn't properly understand the full range of ramifications of a train running under the authority of an Exceptional Load form.

 

It was easy enough to correct the design spec, and it showed the value of a properly informed ISA procedure. But it also showed how things can go wrong when designing automated processes dealing with complex matters if you ask the wrong person - and how do you know who is the right person?

 

So I retain a degree of scepticism when anybody tells me it's a simple matter of looking at 'the rules' and incorporating them into an automated, let alone AI, system.

 

This is the fundamental issue for any process: rubbish in will equal rubbish out. Where this can be most striking is in contracts; contractors are required to deliver whatever it is they've been contracted to deliver, and if that is not what is needed then it shouldn't be the contractor which gets the blame (though it often is). Sometimes contractors play a deliberate game when they realise there are issues with a contract and bid low because they can see the variation orders from the dark side of the moon (especially prevalent with government work, where bidding low and betting on variations is almost mandatory), but quite often suppliers will try and point out that what the customer is asking for isn't really what they need. Sometimes the customer will listen and work with suppliers; other times they'll tell suppliers to stay in their box and not get ideas beyond their station (again, it seems most prevalent with government work). If a customer needs 'A' but orders 'B', then finds out that 'B' doesn't work, they can't blame anyone else, as it's their job to understand their needs. 


21 hours ago, keefer said:

When going over to colour-light signalling was there ever a suggestion about having separate heads for separate routes at junctions? i.e. similar to semaphore arms - instead of one signal head with JI, you have individual signals which may be at different heights/lateral position according to the 'priority' of the routes.

I think Japan is one country which uses this.

The normal American way of route signalling uses this method, but as "searchlight" signals and other methods of reducing the size of a signal head were commonly used the space taken up could be reduced, though columns of "traffic light" signals are becoming a standard. Looking at the rulebook for Metra's Rock Island line, a surviving example of a solely route signalled railroad, the way in which the old system of multiple semaphore signals was preserved with colour lights is clear:

[Image: Metra Rock Island line signal aspects chart]

3 hours ago, Edwin_m said:

I presume it is this one: https://www.railwaysarchive.co.uk/documents/RAIB_Esher2005.pdf

 

The signaller realised that if he put the signals back against a particular train, as required by the rules, it would be more at risk of being struck by another one that had passed a signal at danger due to poor adhesion.  


Yes, it’s linked in the original post, a few lines further down 😉
 

On 10/06/2023 at 14:30, PaulRhB said:

Other rules were broken post the initial incident, but the report noted the other factors that influenced the error; overall a very good report that has led to at least one other Signaller keeping trains apart from reading it. https://assets.publishing.service.gov.uk/media/547c906640f0b602410001b3/R252006_070108_Part_1_Esher.pdf

 


3 hours ago, PaulRhB said:


Yes, it’s linked in the original post, a few lines further down 😉
 

 

Apologies, I was thinking of the second RAIB report I mentioned and got it into my head you hadn't linked to the first one.   


20 hours ago, eldomtom2 said:

The normal American way of route signalling uses this method, but as "searchlight" signals and other methods of reducing the size of a signal head were commonly used the space taken up could be reduced, though columns of "traffic light" signals are becoming a standard. Looking at the rulebook for Metra's Rock Island line, a surviving example of a solely route signalled railroad, the way in which the old system of multiple semaphore signals was preserved with colour lights is clear:

[Image: Metra Rock Island line signal aspects chart]

Quite a lot to remember there; it shows the simplicity of the system in the UK. It also looks similar to the "speed signalling" on the Heaton Lodge Junction - Thornhill Junction section of line in the West Riding.
