German Train Crash


phil-b259


Didn't Felix mention in an earlier post that the signalman can override the red aspect by using the flashing white light on the signal?

 

Somewhere in my collection I have a DB Signalbuch dating from about 1980 that my pal Dieter gave me. His partner is a signalman in one of the smaller boxes just outside Düsseldorf, controlling a set of carriage sidings and a depot; even he managed to get a unit with a major earth fault a few years ago. Just remembered, Wehrhahn I think is the name of his workplace.

 

I think we should thank Felix for his input into this tragic thread. Danke schoen Felix!


Is it possible that the two trains were formed of a single rake and a pair?

 

Various sites including Drehscheibe list the vehicles involved as ET 355 (3-car unit) and ET 325 (6-car unit). Carriage numbers for ET 355 are 39407–39409 and for ET 325 are 39595–39600.

 

http://www.drehscheibe-online.de/foren/read.php?32,4648677

 

The Meridian website gives the number of seats as 158 and 333 respectively, which would indicate that the 6-car unit is not just two permanently coupled 3-car units.

 

http://www.der-meridian.de/fahrzeuge-barrierefreiheit

 

Tony



I think at the moment the only thing we can say is that something clearly went very badly wrong. I have no doubt the investigators will do a thorough job and establish what went wrong.

 

I don't really see the concept of absolute safety as being supportable: the only way to make a train genuinely, absolutely safe (or any other form of transport or process, for that matter) is not to have a train. There are plenty of excellent risk analysis tools to demonstrate a system is safe (off the top of my head: event tree analysis, fault tree analysis, layer of protection analysis, failure modes and effects analysis, and failure modes, effects and criticality analysis), but all of them have limitations and all are only as good as the input data. Software assurance is notoriously difficult for complex systems, and it is not that uncommon to see incidents which we are assured could not happen, often for seemingly silly reasons like simultaneous time-dependent software faults. I recently got involved in an incident where a triplex control system for a very high hazard process crashed with potentially undesirable consequences, because each of the three PLCs had a common time-dependent software fault that took all three out at the same time. The manufacturers had never foreseen such a possibility. And that is before going into the whole field of ergonomics, human factors and maintenance.
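
To make the common-mode point concrete, here is a minimal Python sketch - entirely hypothetical, not any real PLC vendor's code - of a 2-out-of-3 voted trip system defeated by a shared time-dependent fault. The counter-rollover bug is invented for illustration: because all three channels run identical firmware and started at the same moment, all three drop out in the same scan cycle and the redundancy buys nothing.

```python
# Hypothetical sketch: a triplex (2-out-of-3) voted trip system defeated by a
# common-cause, time-dependent software fault. All names and the rollover bug
# are invented for illustration.

COUNTER_LIMIT = 2**15  # imagined signed 16-bit uptime counter in the shared firmware

class Channel:
    """One of three identical protection channels running the same firmware."""
    def __init__(self):
        self.uptime = 0

    def scan(self, reading, trip_threshold):
        self.uptime += 1
        if self.uptime >= COUNTER_LIMIT:  # common-mode fault: same code, same start time
            return None                   # channel crashes - its output is lost
        return reading > trip_threshold

def voted_demand(channels, reading, threshold):
    votes = [c.scan(reading, threshold) for c in channels]
    valid = [v for v in votes if v is not None]
    if len(valid) < 2:
        return "NO VALID 2oo3 VOTE - all redundancy lost at once"
    return "TRIP" if sum(valid) >= 2 else "run"

channels = [Channel(), Channel(), Channel()]
result = "run"
for cycle in range(COUNTER_LIMIT):  # run until the shared counter rolls over
    result = voted_demand(channels, reading=42.0, threshold=100.0)
print(result)  # -> NO VALID 2oo3 VOTE - all redundancy lost at once
```

The voting logic itself is flawless; it is the identical fault in every channel, triggered at the identical time, that defeats it.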


 

The UK had a particularly bad one at Abermule (Wales) back in 1921 on the Cambrian Railways. Cause: the wrong single-line staff was given to the driver by the signalman, and the driver never looked at it to verify it was correct. Two simple but deadly coinciding acts.

 

 

Brit15

Minor point, but the staff was handed to the train crew by the relief stationmaster, because the instrument was kept in the station building. The Cambrian Railways had some very risky practices.

 

Bill



I think at the moment the only thing we can say is that something clearly went very badly wrong. I have no doubt the investigators will do a thorough job and establish what went wrong.

 

I don't really see the concept of absolute safety as being supportable: the only way to make a train genuinely, absolutely safe (or any other form of transport or process, for that matter) is not to have a train. There are plenty of excellent risk analysis tools to demonstrate a system is safe (off the top of my head: event tree analysis, fault tree analysis, layer of protection analysis, failure modes and effects analysis, and failure modes, effects and criticality analysis), but all of them have limitations and all are only as good as the input data. Software assurance is notoriously difficult for complex systems, and it is not that uncommon to see incidents which we are assured could not happen, often for seemingly silly reasons like simultaneous time-dependent software faults. I recently got involved in an incident where a triplex control system for a very high hazard process crashed with potentially undesirable consequences, because each of the three PLCs had a common time-dependent software fault that took all three out at the same time. The manufacturers had never foreseen such a possibility. And that is before going into the whole field of ergonomics, human factors and maintenance.

Or to put it a little more succinctly, you can't rely on computer-driven systems to fail safe every time they fail unless you stick some good old-fashioned electro-mechanical bits on the end.

 

The above tsunami of jargon illustrates beautifully the modern obsession with demonstrating "safety" through mathematics rather than practical effect. The computer model says the system is safe, ergo it is safe; but just how safe is the computer model...?

 

Maybe a little too much emphasis is nowadays being placed upon keeping trains moving and not enough on being able to stop them when necessary?

 

John.


The English-language online newspaper thelocal.de has updates on this and an explanation of the PZB system in English. Sorry, I can't copy the link across on my Apple Mac!

 

I think at the moment the only thing we can say is that something clearly went very badly wrong. I have no doubt the investigators will do a thorough job and establish what went wrong.

I don't really see the concept of absolute safety as being supportable: the only way to make a train genuinely, absolutely safe (or any other form of transport or process, for that matter) is not to have a train. There are plenty of excellent risk analysis tools to demonstrate a system is safe (off the top of my head: event tree analysis, fault tree analysis, layer of protection analysis, failure modes and effects analysis, and failure modes, effects and criticality analysis), but all of them have limitations and all are only as good as the input data. Software assurance is notoriously difficult for complex systems, and it is not that uncommon to see incidents which we are assured could not happen, often for seemingly silly reasons like simultaneous time-dependent software faults. I recently got involved in an incident where a triplex control system for a very high hazard process crashed with potentially undesirable consequences, because each of the three PLCs had a common time-dependent software fault that took all three out at the same time. The manufacturers had never foreseen such a possibility. And that is before going into the whole field of ergonomics, human factors and maintenance.

What does that mean in English?



Or to put it a little more succinctly, you can't rely on computer-driven systems to fail safe every time they fail unless you stick some good old-fashioned electro-mechanical bits on the end.

 

 

WRONG, you can rely on computer systems just as much as mechanical ones - both can have the same safety flaws and be as inherently dangerous as the other.

 

Yes, on a computer-based system the onus for making it safe is on the programmer - if they miss something out and it is not picked up then disaster could ensue. However, a mechanical interlocking requires exactly the same of the designer - if they don't get all the lumps and notches right then you can just as easily end up with a big disaster.
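
To illustrate: whether the 'design data' is notches on a locking bar or entries in a table, the same omission produces the same wrong-side failure. A throwaway Python sketch, with an invented layout and route names, of a conflict table missing one 'notch':

```python
# Invented example: a route conflict table where the designer missed one entry.
# Route "B" should also conflict with "C" - that is the missing 'notch'.
CONFLICTS = {
    "A": {"B", "C"},
    "B": {"A"},        # designer forgot "C" here
    "C": {"A", "B"},
}

routes_set = set()

def try_set_route(route):
    # Only this route's own entry is checked - like a single locking bar.
    if CONFLICTS[route] & routes_set:
        return f"route {route} locked out"
    routes_set.add(route)
    return f"route {route} set"

print(try_set_route("C"))  # route C set
print(try_set_route("B"))  # route B set - two conflicting routes: a wrong-side failure
```

Exactly the same omission could be a missing protrusion on a locking tray; the medium changes, the failure doesn't.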

 

People need to remember that humans have an inbuilt bias towards trusting things they can physically see / hear / touch and that the brain can interpret as 'knowing what is happening'. The internal workings of computers, or indeed of all electronics (being based on the movement of electrons), are something we cannot have a direct sensory experience of (as compared to mechanical linkages or flowing liquids), and as such our brains have a hard time 'trusting' them.

 

 

We see this all the time in the preference of most people to ask a fellow human rather than trust an electronic destination board, despite both giving the same information.

 

In fact, if you think about it, electricity itself is very unnatural to humans - fire, for example, is something you can see and feel without needing to come into contact with it, allowing our inbuilt reactions to try and keep us safe from it - electricity displays no such clues and thus is something we are naturally suspicious of.



Phil, trust me, I have personal experience of electrons flowing, more than once actually. I really trust that it's there and does wot it says on the tin! :rolleyes: :lol:

Indeed - you are not a true railway technician till you have, ahem, 'experienced' 110V, or been accidentally insulation tested by your colleagues ;)



I thought if you were an SR man you had to try 750V?

I'm not that stupid / suicidal, I will have you know! :no: The supplies you find in our grey location cabinets, on the other hand (most of which have exposed 2BA nuts on cable terminations, transformer tappings, fuse holders, etc.), can catch you out though - which is presumably why modern kit all has to be covered in protective plastic shields, which are a pain in the arse when you need to slip links, etc.

 

(Point detection and traditional signals use 110V AC down our way, while the capacitors / links / bond coils used on AC 50 Hz track circuits can have 300V AC on them. Most other signalling circuits use 50V DC.)

 

<edited because I forgot the smilies and my tablet / work computer is being a pain over them>


I'm not that stupid / suicidal, I will have you know! The supplies you find in our grey location cabinets, on the other hand (most of which have exposed 2BA nuts on cable terminations, transformer tappings, fuse holders, etc.), can catch you out though - which is presumably why modern kit all has to be covered in protective plastic shields, which are a pain in the arse when you need to slip links, etc.

 

(Point detection and traditional signals use 110V AC down our way, while the capacitors used on AC 50 Hz track circuits can have 300V AC on them. Most other signalling circuits use 50V DC.)

I know from your posts that you are anything but stupid! My first job with BR was developing rail steels and helping out with trials of different new grades. We had a visit from some SR track engineers demanding to know why they couldn't have some wear-resistant rail on their patch to try. My manager explained that he was worried about the safety of his staff taking measurements so close to the third rail. The response was 'Once you've had the juice a couple of times you get used to it...' It has stuck with me over the years.

 

Seriously off topic, so apologies for that.


What does that mean in English?

It basically means there are various scientific techniques that can be used to reduce, but not eliminate, safety risks in complex systems. 

 

Recall that British Rail was a pioneer in Solid State Interlocking: basically the guts of a 1980s-vintage computer controlling the signals and points. To guard against random glitches, three of them operate in parallel, and if one disagrees with the other two it is "assassinated" by blowing a fuse. If the two survivors then disagree, the system shuts down. The software used, and the data to configure it for a particular track layout, have also undergone very detailed review and testing.

 

A large part of our railway is now signalled by this equipment or more modern equivalents, which are now the standard technology for new schemes in most countries.  As far as I know there has been no accident caused by wrong-side failure of these systems, and by most measures Britain has the safest railway in Europe. 
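
For the curious, the voting arrangement can be sketched in a few lines of Python. This is a toy model only - the real SSI does this in synchronised hardware, and the core behaviours and signal aspects below are invented for illustration:

```python
# Toy sketch of 2-out-of-3 voting with "assassination" of a dissenting core.
from collections import Counter

class Core:
    """One of three identical interlocking processors."""
    def __init__(self, name, compute):
        self.name = name
        self.compute = compute
        self.alive = True

def voted_output(cores, state):
    live = [c for c in cores if c.alive]
    outputs = [(c, c.compute(state)) for c in live]
    majority, count = Counter(out for _, out in outputs).most_common(1)[0]
    if count < 2:
        return None  # survivors disagree: shut the whole system down
    for core, out in outputs:
        if out != majority:
            core.alive = False  # dissenting core "assassinated" (fuse blown)
    return majority

healthy = lambda s: "green" if s["route_set"] else "red"
glitchy = lambda s: "green"  # develops a fault and always tries to clear the signal

cores = [Core("A", healthy), Core("B", healthy), Core("C", glitchy)]
print(voted_output(cores, {"route_set": False}))   # red - the safe majority wins
print([c.name for c in cores if c.alive])          # ['A', 'B'] - core C voted out
```

The same structure also shows the limit of the scheme: the vote can only reject a minority, so a fault common to all three cores (as in the PLC incident described earlier) sails straight through.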



It basically means there are various scientific techniques that can be used to reduce, but not eliminate, safety risks in complex systems. 

 

Recall that British Rail was a pioneer in Solid State Interlocking: basically the guts of a 1980s-vintage computer controlling the signals and points. To guard against random glitches, three of them operate in parallel, and if one disagrees with the other two it is "assassinated" by blowing a fuse. If the two survivors then disagree, the system shuts down. The software used, and the data to configure it for a particular track layout, have also undergone very detailed review and testing.

 

A large part of our railway is now signalled by this equipment or more modern equivalents, which are now the standard technology for new schemes in most countries.  As far as I know there has been no accident caused by wrong-side failure of these systems, and by most measures Britain has the safest railway in Europe. 

 

There have however been programming issues. One was discovered recently at Tonbridge, where something was missing from the data which IIRC (it came out in one of our briefings at work) allowed a set of points to move under a slow-speed train movement - it was just a fluke that the exact conditions necessary to highlight the deficiency had not occurred in the 20-odd years since the SSI signalling had been installed (done in the early 90s in preparation for the Channel Tunnel).

Another issue occurred just after the signalling at Milton Keynes had been renewed (when the extra fast line platform was built), when the driver of a train waiting to leave from the south end saw his starting signal go from red to green and then back to red moments after a fast Virgin train had passed on the line he was supposed to be routed onto. It turned out that a short track circuit had been left out of the route-checking part for that signal when the program for the SSI was being written (and had not been spotted during verification), so as soon as the signaller set the route the driver got a green signal - until the Virgin train hit the next track circuit, which was in the programming and reverted the signal back to red.
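
The Milton Keynes case is easy to picture in code. A simplified Python sketch (the track circuit names are invented): the route-proving data omits one short circuit, so the signal clears while that circuit is occupied, and only reverts when the train reaches a circuit that is in the data.

```python
# Invented track circuit names; deliberately simplified route-proving logic.
ACTUAL_ROUTE = ["TC101", "TC102", "TC103"]  # circuits the route really passes over
PROVED_DATA  = ["TC101", "TC103"]           # faulty SSI data - TC102 was left out

def signal_aspect(occupied, proving_list):
    return "red" if any(tc in occupied for tc in proving_list) else "green"

occupied = {"TC102"}  # the fast train sits on the circuit missing from the data
print(signal_aspect(occupied, PROVED_DATA))   # green - wrong-side: route is not clear
print(signal_aspect(occupied, ACTUAL_ROUTE))  # red - what correct data would have shown

occupied = {"TC103"}  # the train runs on to a circuit that IS in the data
print(signal_aspect(occupied, PROVED_DATA))   # red - signal reverts, as the driver saw
```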

 

However, as I said in a previous post, EXACTLY the same can happen with a mechanical (or indeed relay-based) interlocking - if the designer makes a mistake in specifying the notches / protrusions on a locking tray, or fails to wire up all the relays correctly, then the end result (a potentially dangerous wrong-side failure) can just as easily occur.

 

Thus it's the HUMAN element that is always the weak point of any system - computer-based or not.



WRONG, you can rely on computer systems just as much as mechanical ones - both can have the same safety flaws and be as inherently dangerous as the other.

 

Yes, on a computer-based system the onus for making it safe is on the programmer - if they miss something out and it is not picked up then disaster could ensue. However, a mechanical interlocking requires exactly the same of the designer - if they don't get all the lumps and notches right then you can just as easily end up with a big disaster.

 

People need to remember that humans have an inbuilt bias towards trusting things they can physically see / hear / touch and that the brain can interpret as 'knowing what is happening'. The internal workings of computers, or indeed of all electronics (being based on the movement of electrons), are something we cannot have a direct sensory experience of (as compared to mechanical linkages or flowing liquids), and as such our brains have a hard time 'trusting' them.

 

 

Experience suggests we certainly can rely on computer systems for signalling. However, I would strongly disagree that the types of flaws are the same and that there is only a perceived difference due to irrational bias.

 

A mechanical interlocking operates in a very straightforward manner. It's relatively easy to see if all the lumps and notches have been got right, and if you modify one part of it, you can be fairly sure you haven't made another bit of it mysteriously fail unless you knocked off one of the "lumps". It's also fairly easy to check that it has the expected behaviour.

 

A computer based system is much, much, much more complex, with a complicated program running on a processor which itself is probably more complex than all the mechanical interlockings ever built put together. It can go wrong in complex ways a mechanical system just couldn't, with potential interactions between every part of the program and not just the current but also previous states of every other part. Now this doesn't mean that they aren't safe, but it means that proving that risk has been reduced to an acceptable limit is a much harder task, and it's nothing to do with whether you can see an electron or not.

 

There's a reason that safety critical computer operations generally use more than one computer, which have to agree with each other.

 

We see this all the time in the preference of most people to ask a fellow human rather than trust an electronic destination board, despite both giving the same information.

 

In fact, if you think about it, electricity itself is very unnatural to humans - fire, for example, is something you can see and feel without needing to come into contact with it, allowing our inbuilt reactions to try and keep us safe from it - electricity displays no such clues and thus is something we are naturally suspicious of.

 

Again, I think this is completely rational - electronic destination boards do not always give the same information, and the human is MUCH more likely to be right. When did you ever hear a human announce that a two-coach DMU would split, with the front two coaches going to one destination and the rear coaches going somewhere else?

 

Earlier this week I was waiting for a train at Barnham station. There were lots of cancellations. The platform display said that the next train wasn't scheduled to stop and we should all stand back. A 313 pulled in with "Portsmouth Harbour" on the front and the doors opened, so I jumped on. If a human voice had announced that the train was not to be used then I would have stayed on the platform. Not because I understand how a human brain works as opposed to a computer, but because the automated display was probably confused by all the cancellations, while the platform staff probably know what's going on. The sign on the front of the train was also electronic, but those are set by the crew so are more trustworthy.



Humans are better than machines at dealing with the unusual, and at finding information from unexpected sources or going looking for it, but are much more likely to get something routine wrong for no apparent reason. Both have strengths and weaknesses and always will.



Ironically we work using that 'invisible' electricity all the time; it makes our nerves and brains function.

Which is probably why we are still struggling to fully understand how the human brain works.



Experience suggests we certainly can rely on computer systems for signalling. However, I would strongly disagree that the types of flaws are the same and that there is only a perceived difference due to irrational bias.

 

A mechanical interlocking operates in a very straightforward manner. It's relatively easy to see if all the lumps and notches have been got right, and if you modify one part of it, you can be fairly sure you haven't made another bit of it mysteriously fail unless you knocked off one of the "lumps". It's also fairly easy to check that it has the expected behaviour.

 

A computer based system is much, much, much more complex, with a complicated program running on a processor which itself is probably more complex than all the mechanical interlockings ever built put together. It can go wrong in complex ways a mechanical system just couldn't, with potential interactions between every part of the program and not just the current but also previous states of every other part. Now this doesn't mean that they aren't safe, but it means that proving that risk has been reduced to an acceptable limit is a much harder task, and it's nothing to do with whether you can see an electron or not.

 

There's a reason that safety critical computer operations generally use more than one computer, which have to agree with each other.

 

 

Again, I think this is completely rational - electronic destination boards do not always give the same information, and the human is MUCH more likely to be right. When did you ever hear a human announce that a two-coach DMU would split, with the front two coaches going to one destination and the rear coaches going somewhere else?

 

Earlier this week I was waiting for a train at Barnham station. There were lots of cancellations. The platform display said that the next train wasn't scheduled to stop and we should all stand back. A 313 pulled in with "Portsmouth Harbour" on the front and the doors opened, so I jumped on. If a human voice had announced that the train was not to be used then I would have stayed on the platform. Not because I understand how a human brain works as opposed to a computer, but because the automated display was probably confused by all the cancellations, while the platform staff probably know what's going on. The sign on the front of the train was also electronic, but those are set by the crew so are more trustworthy.

 

You are still not getting the point. You say it's easy to 'see' whether all the notches and bumps are in the right place - but what if you are blind? Strictly you don't 'see' anything - instead you can use the sense of touch to interact with it and confirm that something is physically what you expect it to be. Human evolution is based on the ability to observe (in its broadest sense - not just using the eyes) the world around us - anything we cannot interact with is, as far as our brain's thought processes are concerned, something not to be trusted, as we have no feedback to guide us. Were humans able to sense the movement of electrons in the way we can sense the flow of water (including its temperature, the pressure it's under, its colour, its smell, the sound as it passes over certain materials) then we would be far more trusting of electronic devices.

 

To return to railway matters though: you state that humans are better than a computerised destination board. Well, that's nonsense if the humans responsible for the design, operation and maintenance do their jobs correctly. A member of station staff is not connected to the signalling system - an electronic destination board is (or can be). That destination board can know through the train describer / GPS EXACTLY where the train is, PRECISELY whether it's running to time and DEFINITIVELY whether a route has been set for it to enter / leave a station platform. A human being, if they are to convey the same information, has to check exactly the same things as the destination board - but is much slower at doing so and can only deal with one enquiry at a time. If the designer of the CIS system gets it wrong, then just like a human being looking at the wrong day in the timetable, the customer will get duff information. It's a HUMAN that has screwed up, NOT the computer, which is doing what the system designer intended / forgot.

 

So if I turn to your recent experience at Barnham, what we have here is a breakdown of communication links rather than any proof that an electronic system is inherently any worse than a human being. That 313 will have had a headcode and been in the train describer at Lancing / Arundel (which controls all the junctions at Ford) / Chichester / Littlehampton / Bognor signal boxes, which will have described what service it was, and from that the customer information system (CIS) should have been able to tell you how late it was and where it was headed. When the train moved into the area controlled by Barnham signal box the data should have moved with it and the CIS updated accordingly. However, let's imagine the train describer equipment was faulty and headcodes were not stepping across signalling boundaries. In such a case the CIS at Barnham would not necessarily have known what service was approaching and consequently issued the 'stand back' announcement as a result.
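
To illustrate the failure mode being imagined - and only as a rough sketch, with made-up berth names and a far simpler model than a real train describer - here is the boundary-step idea in Python:

```python
# Toy model of train describer berth stepping; all names are invented.
berths = {"ARU_0123": "2P48", "BAR_0456": None}  # Arundel-side berth, Barnham-side berth

def step(berths, from_berth, to_berth, link_ok=True):
    """Move the headcode to the next berth; a failed boundary link loses it."""
    if link_ok:
        berths[to_berth] = berths[from_berth]
    berths[from_berth] = None

def cis_announcement(berths, berth):
    headcode = berths.get(berth)
    if headcode is None:  # no description: the CIS falls back to a default warning
        return "Stand back - the next train is not scheduled to stop here."
    return f"Service {headcode} is approaching."

step(berths, "ARU_0123", "BAR_0456", link_ok=False)  # the boundary step fails
print(cis_announcement(berths, "BAR_0456"))
# -> Stand back - the next train is not scheduled to stop here.
```

The train itself is fine; the CIS simply has no description for the approaching berth, so it gives its default warning.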

 

(If you let me know the exact date and approximate time I can go back and have a look at the control logs to see what was the issue)


The chances are, if you ask a member of platform staff, they will look at the screen anyway...

I think the point wasn't that a computer or a person would be more correct, but that a human will trust a person more, whether that is the best thing to do or not.



The chances are, if you ask a member of platform staff, they will look at the screen anyway...

I think the point wasn't that a computer or a person would be more correct, but that a human will trust a person more, whether that is the best thing to do or not.

The member of staff is more likely to know which screen to look at, which might be useful if you're at a large unfamiliar station in a hurry.



The member of staff is more likely to know which screen to look at, which might be useful if you're at a large unfamiliar station in a hurry.

 

They are also likely to have looked at the screen many times before. They will be able to tell you that when it says the 17:42 is "on time" it is lying. And if it says it is going into platform 2, don't rely on it. If the 17:33 is running late, which it does more often than not, it's likely to change to platform 3 at the last minute.

 

No automated system can give you this level of information, but for a human brain it is a walk in the park.

 

Martin.


It's basically the Garbage In Garbage Out problem.  The human that designed the system didn't foresee a particular situation, or the human with the correct information was unable to update the system because they didn't have access to it (design flaw) or they were engaged in some other task that took priority (failure to understand process or simple management decision?), or simply forgot (human error). 



You are still not getting the point. You say it's easy to 'see' whether all the notches and bumps are in the right place - but what if you are blind? Strictly you don't 'see' anything - instead you can use the sense of touch to interact with it and confirm that something is physically what you expect it to be. Human evolution is based on the ability to observe (in its broadest sense - not just using the eyes) the world around us - anything we cannot interact with is, as far as our brain's thought processes are concerned, something not to be trusted, as we have no feedback to guide us. Were humans able to sense the movement of electrons in the way we can sense the flow of water (including its temperature, the pressure it's under, its colour, its smell, the sound as it passes over certain materials) then we would be far more trusting of electronic devices.

 

I think we're at cross-purposes here and failing to get each other's points... What I'm talking about is the huge difference in complexity.

 

It's easy to "see" that notches are in the right place because a given notch interacts only with the mechanism near it. It also operates using fairly straightforward rules of physics e.g. two bits of metal can't be in the same place at once, and has very little capacity to "remember" what was going on previously.

 

A computer program is much more complicated and any "bit" of the program can be affected by other bits now or how they were at previous times. It also runs on a processor which itself is extremely complex and could contain bugs. There is a lot more scope for odd things to happen that weren't predicted in a computer program than in mechanisms. Even if this were all laid out in some fashion we could see or touch it wouldn't change things much.

 

 

To return to railway matters though: you state that humans are better than a computerised destination board. Well, that's nonsense if the humans responsible for the design, operation and maintenance do their jobs correctly. A member of station staff is not connected to the signalling system - an electronic destination board is (or can be). That destination board can know through the train describer / GPS EXACTLY where the train is, PRECISELY whether it's running to time and DEFINITIVELY whether a route has been set for it to enter / leave a station platform. A human being, if they are to convey the same information, has to check exactly the same things as the destination board - but is much slower at doing so and can only deal with one enquiry at a time. If the designer of the CIS system gets it wrong, then just like a human being looking at the wrong day in the timetable, the customer will get duff information. It's a HUMAN that has screwed up, NOT the computer, which is doing what the system designer intended / forgot.

 

 
In principle, perhaps. But in practice, I know as a frequent rail traveller that automated information is often completely wrong, especially when there is disruption. Now it may well be that the system is doing exactly what it was designed to do and people aren't feeding the correct information in properly. But that doesn't change things. Currently, automated systems have no "common sense" and will happily give out rubbish if that's what they've been told, whereas humans have the ability to actually think.
 
Example: 3 coach train leaving Cardiff should drop off 1 coach at Westbury. Due to disruption it's replaced by a 2 coach DMU. The display says "First TWO coaches to Portsmouth Harbour, rear COACHES to Westbury". Utter nonsense, and a human announcer would know that (and that a 150 isn't going to split in the middle).
 
Maybe in principle we could have perfect automated systems that never give out misinformation and that would be lovely. But it's not what we have now and I'll take a member of station staff in touch with control and a view of the signalling diagram any day.
 
As a frequent rail traveller, I learnt a long time ago to take any computerised announcement or display with a pinch of salt and to pay much more attention to what a human tells me.

 

I guess to summarise: we have a computerised system with a direct connection to its inputs, which won't make a silly mistake like looking up the wrong day's timetable, but has no ability to spot when it's been fed misinformation. Then we have humans, who have an indirect connection to the inputs via a computer screen and could make a silly mistake, but who can talk to signallers / control AND have the ability to use common sense to understand what's going on. It doesn't mean they always give the correct information, but it's a lot more likely to be correct than anything from a computer - in my experience.

 

So if I turn to your recent experience at Barnham, what we have here is a breakdown of communication links rather than any proof that an electronic system is inherently any worse than a human being. That 313 will have had a headcode and been in the train describer at Lancing / Arundel (which controls all the junctions at Ford) / Chichester / Littlehampton / Bognor signal boxes, which will have described what service it was, and from that the customer information system (CIS) should have been able to tell you how late it was and where it was headed. When the train moved into the area controlled by Barnham signal box the data should have moved with it and the CIS updated accordingly. However, let's imagine the train describer equipment was faulty and headcodes were not stepping across signalling boundaries. In such a case the CIS at Barnham would not necessarily have known what service was approaching and consequently issued the 'stand back' announcement as a result.

 

(If you let me know the exact date and approximate time I can go back and have a look at the control logs to see what was the issue)

 

Happily. Barnham station, 8th February, around 17:50. Displays at Barnham *and* Chichester told us to stand back as the next train wasn't scheduled to stop, but it was actually a 313 for Portsmouth Harbour. Seemed to get sorted out a few stops later.

 

And - while I'm here - the 7:46 from Horsham to Billingshurst routinely arrives in Horsham and an automated announcement tells me to stand back as the train is about to depart... before the doors have even released. None of this gives me much faith that I should pay attention to automated announcements.

